This year is the first year that an awarding organisation has provided free access to students’ GCSE and A-level maths exam scripts online. Consequently, as the dust settles after GCSE results day, many teachers will be poring over the scripts of their erstwhile Year 11s, looking for marking errors that may boost a student over the boundary and into a higher grade. Although this is completely understandable, given the accountability structures teachers operate in, I have to wonder: do maths exams with such objective mark schemes result in an assessment that benefits students? Wouldn’t it be better if we started to assess, at least in part, through more open questions?
Hear me out: of course, when the stakes are as high as they are for GCSE results, maths teachers can take comfort in the fact that reviews of maths papers very rarely result in changes. Even when they do, it’s rarely by more than one or two marks.
But do our mark schemes provide reliability at the expense of validity? Rather than being a comfort, perhaps this "reliability" is actually a straitjacket that narrows students’ experience of mathematics and contributes to their inability to use the maths they learn at school once they leave.
Modelling the real world
Using mathematics to model the real world is a fundamental part of a well-rounded maths education, and yet it is something largely ignored by our maths exams. Because of the demands of "reliable" mark schemes, any question related to practical application is tightly constrained. Students are unable to make assumptions for themselves, and so must rely on combining the values supplied in the question and drawing on their experience of similar examples from past papers in order to find a solution.
On the other hand, a "Fermi estimate"-style question, which asks students something like “How many cars are there in a small town?”, would be impossible to write an objective mark scheme for. In a question such as this, the sophistication of the approach is more important than any actual numbers used. Students would have to lay out the assumptions they make, select a mathematical approach and then come up with a ballpark figure. In the mark scheme, the quality of the assumptions, the sophistication of the techniques selected and the accuracy of any calculations could all be given credit. This would be open to much more subjective judgement from the examiner, albeit within a clearly defined framework.
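To make the kind of reasoning being credited here concrete, a student's working for the cars question might be sketched in a few lines of Python. Every figure below is an illustrative assumption a student would have to state and justify, not real data:

```python
# A Fermi estimate for "How many cars are there in a small town?"
# Each parameter is a stated assumption; changing it changes the
# estimate, and the mark scheme would credit the reasoning, not
# the particular numbers chosen.

def fermi_cars_in_town(population=20_000,        # assumed size of a "small town"
                       people_per_household=2.4,  # assumed average household size
                       cars_per_household=1.2):   # assumed car ownership rate
    """Chain the assumptions together to reach a ballpark figure."""
    households = population / people_per_household
    return households * cars_per_household

estimate = fermi_cars_in_town()
print(f"Ballpark: roughly {estimate:,.0f} cars")  # an order of magnitude, not a precise answer
```

A different, equally defensible set of assumptions would shift the answer; what an examiner could assess is whether the assumptions are plausible, whether the chain of multiplications is sound, and whether the student recognises the result as an estimate.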
'A sea change in attitude'
If GCSE exams contained a proportion of questions with more subjective mark schemes, it would, of course, make marking measurably less reliable. But in a world where ever-increasing pressure for results means that "what you test is what you teach", it would give teachers scope to get students to engage in mathematical exploration as genuine exam preparation.
What’s more, it would encourage teachers to apply common numerical quantities to real-world situations – as in the example above, where having some understanding of the populations of towns and cities could help to improve the quality of any assumptions.
Of course, this would require a sea change in attitude from ministers, commentators and many educators, but a new approach to exam questions might just be one piece of the puzzle that helps us create more successful, more engaged mathematicians in the future.
Darren Macey is framework developer for Cambridge Mathematics and a former secondary maths teacher.