This year’s A-level and GCSE candidates did not spend any time sitting at exam desks taking national public exams under controlled conditions. However, they did work hard, often in difficult circumstances, and they will be awarded grades this summer. Their teachers also worked hard to support the process that made these grades possible.
England’s qualification system is excessively dependent on terminal, high-stakes, age-defined, externally set exams and there is a strong case for reform. But that’s a debate for another day. The question right now is: with all the changes that had to be made, will students get the grades they worked for? The grades they deserve? The "right" grades?
Everyone wants fairness, and within the terms of our current system, fairness was central to this year’s approach to grading. It’s worth remembering what a challenge this is. GCSE and A-level assessment is essentially based on standardised written exams that are marked in a common way to generate exam scores and then grades. This means that candidates who sit these exams are graded and ranked against agreed assessment objectives using common criteria, leading to roughly the same grade distribution from year to year.
The difficulty of calculating A-level results
Crucial to this grading process is the exam score; the very thing that is not available this year. Without this key information, the system was tasked at very short notice with trying to replicate what happens in a "normal" year. The approach that was developed drew instead on centre-assessment grades and rankings as well as an understanding of the performance of previous students, nationally and by centre.
From what we’ve seen in colleges, centre-assessment grades were very carefully produced, based on what teachers know about their students and on the results of college-wide standardised assessments, with mocks as one piece of evidence. These grades were then subjected to rigorous internal challenge and moderation; they are not subjective opinions. If they sometimes give the "benefit of the doubt" to students who were close to a grade boundary, that will have been carefully considered.
We also know that asking staff to rank their students was problematic. Labelling students with what felt like a fixed-ability tag doesn’t come naturally in a sector which is all about aspiration and progress rather than selection or closing off options. However, teachers recognised that they were being asked to mirror the ranking that a particular exam might have generated at a particular moment in time, and they took this process very seriously.
The exam system, like the education system itself, reflects the pervasive and entrenched inequalities in our society. These inequalities take many forms, some more visible than others; and challenging and dismantling them will take determined and sustained effort over time. For this summer at least, we need to be convinced that the grading process has not deepened any of those inequalities and Ofqual has said that "there will generally be no widening of the gaps in attainments between different groups of students".
Once colleges and schools had done their bit, it was down to Ofqual and the awarding organisations to make those adjustments, nationally and by centre, which ensure that the grades feel similar to previous years overall and reflect how students have done historically in each centre based on their prior performance. This requires a statistical model; in this case, the chosen one was a "direct centre-level performance approach".
Until results day, students in England won’t know what grades they’ve been awarded in any subject. But we do know that most students will get the grades their centre predicted for them and that any adjustment will generally be no more than one grade either way. We also know that the overall pass rates for both GCSE and A level will be up a little on last year.
Once students have their results, they will need to make some decisions about the options available to them, including whether to take any autumn exams or to appeal based on mock results. We will know whether the grade profile, value-added and performance of different groups of students are broadly similar to those of recent years. We will also need to ask whether any candidates have been disadvantaged by the context or history of their centre. And, of course, we will all be asking: was this fair? There are always some students who feel their final grades don’t reflect their ability. The litmus test for this year will be whether there are many more than usual.
The truth is that there is no failsafe system for establishing an objective "correct" outcome for every student and the complex business of evidencing learning can’t easily be summed up by simple metrics. The language of assessment is expressed in instruments, judgements, estimates, approximations, margins of error and confidence intervals. That’s why no one should ever be defined or limited by one set of results.
This has been a difficult and extraordinary process and while there will be many questions about how it went, we need to find time over the next two weeks to celebrate the achievements of the amazing class of 2020 and to focus on their potential and future progression.
Eddie Playfair is a senior policy manager at the Association of Colleges