Schools have known since last summer that something went badly wrong with the 2015 Cambridge IGCSE qualification in English language. The Association of School and College Leaders estimates that several thousand candidates received suspect grades.
Last week, two independent-schools groups – the Headmasters' and Headmistresses' Conference and the Girls' Schools Association – published an inquiry into the pattern of results awarded to around 30 per cent of the candidates entered for one of the versions of this IGCSE.
What did this show? That top-end grades for this subject were wildly out of kilter with the achievement of the same candidates in comparable subjects.
The director of assessment at the exam board Cambridge International has said that if any students are unhappy with their English-language grade last summer – and we can assure him there are plenty, numbering in the thousands – “that’s unfortunate, and I can empathise with those students”. So can their teachers, in spades, but we’ll come back to that later.
In a strange way, it's also possible to empathise with the plight of the board and the exam regulator, Ofqual. When things like this go wrong, it seems that all they have to fall back on are their spreadsheets, algorithms and statistical software. Buttons were punched again last week, both at the board and at Ofqual. Fresh printouts were generated. And, entirely predictably, the grades awarded were declared in wearyingly familiar terms to have been "appropriate", "robust" and, most underwhelming of all, "suitable".
The problem is that things go wrong with exams. Often quite small things. But if several small mistakes occur in combination, the design of an entire qualification can be corrupted. When many hundreds of experienced teachers know that something like this has happened, no number of scattergraphs or bar charts re-run by the high command is going to restore their trust and confidence.
The examinations industry urgently needs to find new methods of detecting injustice at the level of the individual candidate. Rigorous use of cohort-level statistics was important in bringing an end to the grade inflation of the 1990s and 2000s so that standards could be reset. But the pressing challenge now is restoring professional confidence in the fairness of individual results. Using the same statistical tools to tackle this very different problem won't succeed in fixing it.
An entirely new approach is needed if the trust of teachers is to be restored. In particular, when specific things go wrong, the investigation that follows needs to be seen to have been self-evidently fair and convincing in the case of each candidate affected. Time and again, the default is for the boards and the regulator to close ranks when there are “little local difficulties”. This is neither appropriate nor robust.
So what is to be done for last summer’s wronged candidates, many of whom are starting to think seriously about their next step after leaving school?
I have bad news for vice-chancellors and university admissions tutors. We are going to have to explain something in the Ucas references of each of the affected students: that, collectively, our schools do not have confidence in the English-language grade included in their application if it was taken with the Cambridge International board in 2015.
Let’s hope that this is a unique case. The last thing admissions tutors want to hear – and the last thing we wish to tell them – is that some school exam grades are not to be trusted.
Meanwhile, there is good news for this year’s candidates sitting the equivalent exam next month. The safest time to fly is when that same class of aeroplane has already been thoroughly re-inspected following a recent crash.
Chris King is head of Leicester Grammar School and chairman of the Headmasters' and Headmistresses' Conference