Schools should be judged by their results over a period of at least five years rather than “condemned” on the basis of just one year’s performance, according to one of the country’s biggest exam boards.
Researchers at Cambridge Assessment believe that a study into the volatility of exam results could have wider implications for how schools are held to account in future.
The document states that teaching, exam reform and marking are often blamed when schools suffer “wild swings” in results. But researchers Tom Bramley and Tom Benton found that once the factors affecting the reliability of marking were accounted for, there was still “significant” volatility in the system.
“When swings occur they could be because of what is happening in the school or the children’s lives, they could be to do with the assessment itself or the way that national standards are applied, or to do with teaching and learning,” Mr Bramley said.
“But what our study shows is that when we’ve taken account of the variations which can be attributed to quality of marking and to the location of grade boundaries, surprisingly high levels of year-on-year volatility in exam results remain.”
The results of the study led Tim Oates, group director of assessment research and development at Cambridge Assessment, to suggest schools be judged by their results over a period of “at least” five years.
“It appears that underlying school-level volatility may be an enduring and persistent feature of education arrangements, which means that school performance – in terms of exam results – should be judged on a five-year picture rather than one-off annual drops or increases,” he said.
Mr Oates described the findings as “very important”, adding that the research “challenges many assumptions, with implications for the approach to accountability and for accountability measurements”.
The decision to conduct the research came after a report by the HMC elite group of independent schools found that "unexplained and very large variations" in grades were a "serious concern".
The Cambridge Assessment researchers therefore decided to investigate the extent to which volatility in exam results was caused either by the quality of marking or by the way in which grade boundaries have been set.
The researchers studied the results achieved by 146 schools between 2008 and 2013. Specifically, they looked at results in GCSE maths and history. These two subjects were chosen because maths is renowned for an extremely reliable marking system, whereas history – like all subjects requiring extended answers – is widely deemed to be less reliable.
“Even in a reliably-marked subject like maths, there is still considerable fluctuation at school level from one year to the next, even in schools with relatively large and stable entries,” the researchers said.
History results showed an even wider degree of variability.
Fluctuation of exam results has been a key concern recently. Last summer, teachers warned pupils that changes to GCSE and A-level exams this year might lead to particularly volatile results. The exams regulator Ofqual was sufficiently concerned about the impact of changes to the exam system that it revised its rules about which results would be included in school league tables.
However, the Cambridge Assessment researchers found that there was still a significant amount of volatility, even when the exam system remained unchanged and marking was accurate. This was true no matter where the grade boundaries were placed.
The researchers therefore concluded that, on an individual school level, there was significant volatility in the system.
“Even if marking is accurate, and even if we deliberately choose grade boundaries purely to minimise volatility…volatility in schools’ results would remain,” they said. Indeed, more than a fifth of schools would still experience levels of volatility that the HMC regards as a serious concern.
“What is clear is that volatility alone cannot be taken to imply that either marking or setting of grade boundaries has been performed incorrectly.”