Can we make exam marking more reliable?

As the exam season gets underway, Daisy Christodoulou asks whether our marking processes can be improved

As the exam season rolls around again, many teachers will get used to fielding questions from students about the way exams are marked.

I can remember being bombarded with questions like "Who will mark my exam?", "How do I know the marker doesn’t know me?", "What if they just don’t like the story I’ve written?" and "Will they mark me based on my handwriting?"  

Many teachers will themselves have a related set of questions around the extent to which they can rely on the marks and grades from the exam, or from any mocks.

The aspect of marking that students and the wider public often focus on is that of marker unreliability. For subjects like English language, which are assessed with open questions that require extended written responses, this is an important factor. The same script might get a different mark from different markers. Mark scheme phrases like “use a range of vocabulary and sentence structures for clarity, purpose and effect” are hard to interpret consistently.

Can we eliminate unreliability?

If exams were made up entirely of multiple-choice questions marked by machines, we could dramatically reduce this kind of unreliability. However, even those of us who like multiple-choice questions may not feel confident relying on them alone. Ultimately, most of us are probably happy to trade off some reliability if it means students can develop their thoughts in an extended piece of prose.

And even if we were to totally eliminate marker unreliability, we would still have to contend with two other sources of unreliability: the test itself and the students taking the test.

The particular selection of questions on a test will have an impact on how well students do. Suppose a history exam covers Germany from 1850 to 1950, and suppose a particular candidate only manages to revise the Second World War. If the main essay question on the paper is about that, they may do quite well. If it's about Bismarck, they may do less well. Again, there are ways to mitigate this problem, and again, doing so involves trade-offs. Asking more questions means we rely less on a student getting lucky or unlucky. However, the more questions you ask, the longer the assessment, and the greater the time and expense of marking it.

Preparation is key

Then, you have the students themselves. Have they had a good night’s sleep and eaten breakfast? Did they read the question properly? Did something happen at home or on the way to the exam hall that might disturb them? This can obviously affect how well they do on the test.

The unreliability of exam marking can be frightening, given how much depends on it, and we should always be looking for ways we can improve. But it is important to remember that some of the unreliability is within a student’s control: good preparation and revision will make a difference. An element of uncertainty will always remain, but this is true of any project we undertake. So the best advice for students might be the same as we’d give for other big steps: prepare as well as possible, and accept that some things are out of our control.

Daisy Christodoulou is director of education at No More Marking and the author of Making Good Progress? and Seven Myths About Education. She tweets @daisychristo
