No computer algorithms, no mass adjustments: instead teachers will be trusted to set all exam grades themselves.
Yet the simplicity of the message belies the complexity of the task – from the initial work deciding the grades to their "moderation" and how they can be appealed.
So what do the next few months have in store for teachers facing this unknown situation? Here are some thoughts on what awaits us between now and results day 2021:
GCSEs and A levels 2021: Potential problems with awarding grades
1. The exemplar material and guidance may be insufficient
By Easter, teachers have been promised help from the exam boards – in the form of mark exemplars and grade descriptors – to support the assessment of student work under the teacher-assessed grades plan.
But how helpful will this assistance be?
After all, it is impossible to cater for every eventuality in every context. An exemplar of work produced under exam conditions is not going to be comparable to exemplars written with an open book and a writing frame as homework.
In addition, the exemplars will have to "peg" the standard we're expecting either to pre-pandemic performance (2019 and earlier) or to the more "generous" grading of 2020.
This is something that has caused confusion at the top of government, with education secretary Gavin Williamson saying first that there would be no way to peg results to past years: "We didn't feel as if it would be possible to peg to a certain year because, sadly, as a result of doing that it would probably entail the use of some form of algorithm."
He then later said the exam boards would set guidelines that were “broadly pegged to performance standards from previous years”.
This is an issue that Geoff Barton, general secretary of the Association of School and College Leaders, summed up neatly in his Tes column last week: “Are grades this year pegged to previous years or not?”
Until we know for sure, it will be hard for teachers to know what standard they are assessing work against to inform grades.
2. Moderation will prove impossible
To understand why the proposal for grading this summer is such a difficult task for teachers, you have to understand how GCSEs are normally graded.
The exams are set by exam boards, and every paper is written to be a comparable level of challenge to those of previous years.
Then all of the students sit those papers on the same day, in exam halls under the same conditions – everything identical. Next, before those exam papers are sent out to examiners, there has to be an agreement on how to apply the mark scheme.
So every exam board has a chief examiner for every paper. Together with a small group of other senior examiners, they mark a large sample and agree upon a set of standardised scripts that are used as exemplars of specific marks – not grades – and this is an important difference.
These scripts are then shared with the examiners, who are placed in teams, trained on how to mark the paper and "calibrated" so that all of their marking is brought in line with the standardised scripts.
And even after that, during the marking period, examiners are regularly checked so that their marking is consistent and in line with those standardised scripts.
Then, and only then, do all of the exam boards come together with Ofqual and agree the grade boundaries.
For this process, they use examples of exams that scored a grade 9 and a grade 4 from previous years, as well as pre-existing data about the cohort to inform the decision, including the National Reference Tests, which let us know if the cohort is exceptionally weak or strong compared with previous years.
All of this is to maintain standards over time, and to preserve a reliable sense of what a top or a bottom grade looks like.
This year, teachers will have none of that at their disposal. There will be no anonymity, consistency, standardisation or cohort referencing.
So when people talk about grade inflation, it isn't a matter of teachers wilfully inflating the grades; rather, the methods used to assess this year cannot be compared with those of previous years, leaving much of the work as a shot in the dark.
3. There may well be adjustments anyway
Last year the algorithm was crudely applied to all students and, despite many warnings that it would not work – and the subsequent outrage when this proved correct – it took almost a week after A-level results day before a U-turn.
This year we had the promise of exams in the summer, even though it looked increasingly unlikely that schools would avoid another extended lockdown, and we have ended up with teacher-assessed grades.
So we are perhaps entitled to ask: what U-turns might we see this year?
Well, instead of an algorithm, we will have random sampling and investigations if a centre enters grades that exam boards feel are “out of line” with previous performance.
But will this be enough? Not everyone thinks so. Sir Jon Coles resigned from his post as Department for Education Ofqual adviser, warning that exams in 2021 will be another "terrible tangle".
"In this case, the government is desperate not to be accused of having ‘an algorithm’ or of ‘exams by the back door’. Focusing on this, rather than the actual goal – how we are going to be fair to young people – risks an outcome in August much worse than last year’s," he wrote on Twitter.
So what might the Department for Education be preparing for? One clue might be in the fact that it has decided to go ahead with the National Reference Tests.
It may be that the data from these tests will be used to gauge what level of inflation will be tolerated – a barometer to judge whether or not to intervene with the teacher grades.
The "trust teachers" narrative is likely to last only as long as it serves a purpose, and the prioritisation of standards over time is likely to decide whether the grades submitted are judged to take inflation to an intolerable level.
4. Appeals will be complicated
One of the few pieces of good news to come out of the grading consultation was the confirmation that the proposal to run appeals through schools had been dropped.
However, with all of the issues with the inconsistency of the assessments themselves, will exam boards be much better placed to adjudicate?
Fair judgement relies upon robust exemplars and standardised materials – and, for the reasons previously outlined, they won’t be up to the job.
Yet no doubt many appeals will be lodged by parents and students unhappy with their outcomes.
This may put teachers and schools in a tricky position when they face being questioned by exam boards on why certain grades were awarded – and put a strain on school and teacher relationships with families.
Part of the problem for teachers setting grades – and for those assessing appeals – is the difficulty of pinning down the difference between adjacent grades.
With the new specification introduced in 2017, we’ve made this job even harder by increasing the number of grades to award, so the difference between grades is even more challenging to quantify.
It's the same problem you face whenever you rate something. If you simply have a pass/fail system, you can be more confident in your decision. But grades from U to 9? What really is the difference between a 2 and a 3? Or a 7 and an 8?
Without clear, specific guidance and the usual standardised exam papers, it's very hard to get this right – and you are liable to be judged wrong in someone else's eyes on something that is very difficult to quantify.
Even in normal times, the system is no panacea: one in four grades is "wrong", according to Ofqual, owing to the grey area of marking tolerance – an issue the Higher Education Policy Institute (HEPI) has been highlighting for many years.
Perhaps one piece of good news is that the government has built in an extra three weeks for grade appeals – they're going to be needed.
What does all of this mean for the future of exams?
Far from this being the year when we reassess the purpose of exams, we’re instead setting ourselves up for another grading fiasco.
None of the changes brought in actually address a key concern teachers have had about the system – that it is designed to fail a third of all students every year.
Will this new flurry of assessment prepare students who have missed so much time in school for their next step? Unlikely. It will probably result in less teaching time, as teachers rush to set and mark assessments in time for the 18 June deadline.
Exams are a flawed method of assessment, but they’re probably the best method we’ve got. However, there is much more we could do to make them useful and fair.
Sadly, it feels this year we have not used the time wisely to consider more forward-thinking alternatives and instead created another model ripe for confusion and frustration for all involved.