How multiple-choice questions boost student assessment
Many in teaching view multiple-choice quizzes as little more than a starter activity but, when Elena Russell introduced this style of questioning across her English department to formally test students, she found it to be a rigorous way of checking understanding – with the added bonus of reducing staff workload
Here’s a quick quiz for you. Please complete the following sentence: Multiple-choice questions are … a) pointless; b) fine for fun quizzes but little else; c) primary school stuff; d) none of the above. If you picked “d”, secondary teacher Elena Russell would agree with you.
Inspired by common practice in the US, she decided to bring the multiple-choice question to her English classroom as a way to improve assessment practices. She explains how the approach works.
Why did you need to change how you were assessing students in English?
We wanted to be able to identify clear areas for intervention, and we felt the existing model of assessment – in which one essay was designed to sum up everything pupils knew about a topic – wasn’t a good enough representation of their knowledge. We realised we needed a blended approach that incorporated assessment of core knowledge and application of skills.
Multiple-choice questions were the solution you settled on. This isn’t an approach you see much in secondary English classrooms, is it?
Not really, no. In my experience, multiple-choice questions (MCQs) are mainly used for quite basic recall practice. There appears to be a widespread misconception that, because students don’t have to pull an answer from thin air, MCQs are an easier form of questioning and therefore not a reliable measure of students’ knowledge and skills.
I’ve often seen them used to support lower-attaining students, as they can help to combat the “fear of the blank page” but, beyond that, they’re rarely used as anything more than a starter quiz, and certainly not for summative assessments.
What’s the evidence to support wider use of MCQs?
There’s a lot of research behind the benefits of MCQs for formal assessment. In the US, they are widely used to test knowledge and aptitude in assessments such as the SAT, and they are also used in higher education settings. Having sat a multiple-choice exam paper myself when I was studying psychology at university, I have never felt that these questions were the easy option. In fact, I believe this style of questioning can rigorously test how well a candidate can apply knowledge to different contexts.
I’ve also been inspired by education adviser Mary Myatt’s talks on curriculum and the benefits of high-challenge, low-threat learning. MCQs align perfectly with this theory: they are a way to challenge our students while minimising threat.
What approach to MCQs did you decide to use and how did you implement it?
The approach we use now is a blended method of assessment, including more traditional written questions and MCQ tests.
We’ve created end-of-topic MCQ assessments for all year groups, each made up of 40 questions. We began with 60 questions, with the idea of each question taking roughly a minute to complete, but we found that a 60-minute lesson was not enough time for the students to complete the test without “panic ticks”, so this was reduced to 40.
Creating the questions took a bit of time initially, as there needs to be consistency in the level of challenge and questions need to be clearly mapped to the content. But now that the questions have been created, they will stand the test of time with only minimal tweaking required.
How are the tests marked?
We trialled a variety of methods. We began with self-assessment, with students marking their own papers (this meant we could often tell simply by the reaction in the room if there was a particular question a lot of them hadn’t got right).
We also trialled using scanning technology, where students filled in an answer paper, it was scanned and results were recorded immediately. This was an absolute winner for staff workload. However, as ever with technology, there was little room for error and, if answers weren’t perfectly recorded, some students ended up having to rewrite their answers or teachers had to copy them over. Not ideal.
We’ve found the good old tick-and-flick to be most effective overall. As an English teacher, it’s the easiest marking I’ve ever done and it’s really helped us, as a team, to create some incredibly purposeful feedback sessions.
Beyond easier marking, what have the advantages been?
The great advantage is the easy analysis of results: working this way means we can give students a detailed breakdown of their performance.
For example, the topic of A Christmas Carol is broken down into context, key characters, plot and quotation analysis. Students answer MCQs for each of these areas. They are given a percentage score for each area as well as an overall result. These scores help to focus their revision, as they make it clear where their weaknesses lie.
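For readers curious how this kind of breakdown might be produced, here is a minimal sketch in Python. The area names, question numbers and answer choices are purely illustrative assumptions, not the department’s actual assessment data or marking system.

```python
# Hypothetical sketch of a per-area MCQ score breakdown.
# All names and numbers here are illustrative examples only.

def score_breakdown(answers, key, areas):
    """Return the percentage correct per area plus an overall percentage.

    answers: dict mapping question number -> the student's choice (e.g. "b")
    key:     dict mapping question number -> the correct choice
    areas:   dict mapping area name -> list of question numbers in that area
    """
    report = {}
    total_correct = 0
    for area, questions in areas.items():
        correct = sum(1 for q in questions if answers.get(q) == key[q])
        total_correct += correct
        report[area] = round(100 * correct / len(questions))
    report["overall"] = round(100 * total_correct / len(key))
    return report


# Illustrative use: a four-question paper split across two areas.
key = {1: "a", 2: "b", 3: "c", 4: "d"}
areas = {"context": [1, 2], "plot": [3, 4]}
answers = {1: "a", 2: "b", 3: "c", 4: "a"}  # one wrong answer in "plot"
print(score_breakdown(answers, key, areas))
```

Run on the sample data above, this reports 100 per cent for context, 50 per cent for plot and 75 per cent overall, the kind of per-area summary students and parents can act on.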
These results are also easy for parents to understand, which is a big win. The more detailed analysis has led to more informed conversations with parents and a sharper focus on key topics for revision.
Overall, there’s definitely been an increased emphasis on knowledge and retention of information. The non-MCQ written components of the assessment model have also improved because the students have more to write about and because their knowledge is being more rigorously checked.
What advice would you offer to staff looking to implement something similar?
If you are looking to move to an MCQ-based model, make sure you work collaboratively on creating the assessments: share the work with other members of your department and see what is already out there. Online intervention packages, where there are community banks of questions, can be a lifesaver for planning.
When it comes to introducing the approach to students: train, train, train. They will need lots of practice with answering this type of question, especially if it is not something they’re used to. Encourage them to read each question carefully and not to be complacent about their odds, which can tempt them into the occasional lazy guess.
Similarly, staff will need training on how to write an effective MCQ. A really good MCQ assessment is like a work of art. The wrong answers, or distractors, need to be carefully chosen so that any of them could be right (using a joke or a freebie distractor that is clearly wrong reduces the challenge and is a wasted opportunity to check the specific knowledge required).
Finally, try a variety of marking methods and see which works best for you. See what technology is out there and, above all, enjoy a new way of assessment.
Elena Russell is a lead practitioner for English at Leeds City Academy
This article originally appeared in the 11 June 2021 issue under the headline “How I...Used multiple-choice to boost assessment”