From the Editor - Comparable outcomes? The cat's out of the bag
The social news website BuzzFeed has an interesting approach to news. It takes an event and attaches a list, and often an animal - usually a cat. "23 People Who Will Kick Kim Jong Un's Ass If He Fires a Missile" or "16 Cats Interpret 16 Margaret Thatcher Quotes", for example.
The site hasn't yet turned its attention to our exams system, which is a pity, because it could do with the clarity of a list and a few gratuitous animal pictures. "22 Kittens Demonstrate What 'Comparable Outcomes' Means To Them" would cut through an awful lot of confusion. "Is It a D or a C? The 17 Dogs Who Have Literally No Idea and the 15 Cats Who Do" would be even better.
Last year was the first in two decades that exam grades didn't rise. Conspiracy theorists blame Michael Gove and his desire to end grade inflation and return to an academic "gold standard". It seems things are a little more complicated than that (see pages 24-28). Trying to explain why in a few sentences and without the aid of a cute animal is difficult. But here goes.
In short, although the government welcomed the halt in grade inflation, the beginning of the end can be traced back to 2001. Regulators, concerned that grades in the reformed A level would dip because it was a new exam, benchmarked the results to previous tests sat by the same student cohort.
Grades for the whole cohort - not, it must be stressed, individuals - were adjusted to reflect the performance of previous years and to iron out any unfair wrinkles. Over time, "comparable outcomes", as it is known, became common practice and was used to peg the results of every exam to prior attainment. Final grades were a trade-off between two things: student performance in any given year, and an adjustment to ensure that those results weren't out of kilter with previous years and did not undermine exam credibility.
Unfortunately, as critics warned before the 2010 general election, the balance gradually began to shift: more weight was attached to statistical history and less to current performance. Why? Probably because the certainty of a statistical model was a comforting refuge for regulators buffeted by journalists raging about grade inflation.
Now the orthodoxy of comparable outcomes is entrenched to such an extent that even the puritans at Ofqual are worried. How, they and others wonder, can exam results reflect student improvement if grades are, to all intents and purposes, fixed?
Well, clearly they can't. If there is a significant increase in achievement, the exam system as currently arranged will be blind to it. This is bad news for students, whose efforts will go unrecognised, and for schools, whose capped results will not keep pace with ever-rising floor targets.
But spare a thought, dear reader, for our politicians. How are they supposed to show that, under their tutelage, the system has improved if there is no way of demonstrating improvement? These high-profile victims could finally force the authorities to act. Or as BuzzFeed would put it: "Poodles Turn Out To Be Ferrets: Ofqual Weighs In."