‘Individual exam results do not tell a school’s story’

Exam results can measure an individual’s progress, but beware the abuse of aggregation when it comes to measuring a whole school
21st September 2018, 4:00pm


Like panopticons, high-stakes exams afford several views from the same position: in this case, not just of a student’s capabilities, but also a school’s or a teacher’s impact. They even allow sight of the performance of the whole cohort, and thus the effectiveness of the nation’s education system.

It might be assumed that the greater the degree of aggregation and generalisation, the less secure one can be in interpreting the data. In fact, the only statistically safe use of a summer’s worth of exam results is in the totality of the cohort’s performances. Even exam boards accept that an individual’s grade has to be hedged around with fairly wide confidence limits. Yet while any individual outcome is uncertain, aggregating those outcomes produces a grade distribution that should prove statistically reliable, because individual errors largely cancel out. Helen may or may not have gained an A* this year, but roughly the right group of candidates will have achieved the top grade.
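
To make that concrete, here is a minimal Python sketch; the grade boundaries, cohort size and noise level are all invented for illustration. Each candidate’s “true” mark is nudged by a little exam-day noise, so a candidate sitting near a grade boundary can land on either side of it from one sitting to the next, yet the cohort’s overall grade distribution barely moves.

import random

random.seed(42)

# Hypothetical grade boundaries, highest first.
GRADE_BOUNDARIES = [(90, "A*"), (80, "A"), (70, "B"), (60, "C"), (0, "U")]

def grade(score):
    """Map a raw mark to a grade using the hypothetical boundaries above."""
    for cutoff, label in GRADE_BOUNDARIES:
        if score >= cutoff:
            return label
    return "U"  # below every boundary

# A cohort of 1,000 candidates with fixed "true" ability marks.
true_marks = [random.gauss(65, 12) for _ in range(1000)]

def sit_exam(marks, noise_sd=4):
    """One sitting: each candidate's mark is perturbed by exam-day noise."""
    return [grade(m + random.gauss(0, noise_sd)) for m in marks]

# Re-run the same exam many times and compare the two views.
sittings = [sit_exam(true_marks) for _ in range(200)]

# Individual view: how often does candidate 0 get the same grade?
candidate_grades = [s[0] for s in sittings]
modal = max(set(candidate_grades), key=candidate_grades.count)
print(f"Candidate 0: modal grade {modal}, awarded in "
      f"{candidate_grades.count(modal) / 200:.0%} of sittings")

# Aggregate view: the share of A* grades barely moves between sittings.
a_star_rates = [s.count("A*") / len(s) for s in sittings]
print(f"A* rate across sittings: min {min(a_star_rates):.1%}, "
      f"max {max(a_star_rates):.1%}")

Run it and the individual’s grade flips between sittings while the A* rate stays within a fraction of a percentage point: the uncertainty washes out in the aggregate.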

It is tempting to use aggregated exam results to measure the effectiveness of individual schools. But the problem with headline exam results is that they tell us next to nothing about the impact schools have on student performance. A selective school should have little difficulty topping the tables based on the percentage achieving top grades, but then the students were admitted on the basis that they were destined for those grades.

Less selective schools rightly resent the implication that more modest headline results reflect a mediocre educational input. So they work hard to convince pupils and parents that the thing to look at is the value added from one set of exams to the next. To do this without appearing defensive, and to do it every year with a new cohort, is a task worthy of Sisyphus.

But using value-added measures rather than raw results doesn’t give us as much clarity as we might hope. A fair question would be: what value does value-added add?

Much depends on the benchmark chosen. Given that the benchmark is itself (to a greater or lesser extent) a measure of prior education and attainment, the rate and magnitude of further progress remains reliant on earlier inputs (not to mention the impact of other, non-school factors, such as home environment).

Measuring the progress made in the two years of the sixth form represents an especially acute challenge. The usual procedure is to take GCSE as the starting point, and then work out whether the grades achieved at A level are in line with the typical achievement of students who started with the same GCSE profile.
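
In outline, the calculation looks something like the following Python sketch, using entirely hypothetical figures. The “expected” A-level score for each GCSE band is the mean outcome of students nationally who started in that band; a school’s value added is the average gap between its students’ actual and expected scores.

from collections import defaultdict

# Hypothetical national reference data: (mean GCSE grade, A-level points).
national = [(7.5, 48), (7.5, 52), (6.0, 36), (6.0, 40), (4.5, 24), (4.5, 28)]

# Expected A-level points per GCSE band = national mean for that band.
totals = defaultdict(lambda: [0, 0])
for gcse_band, a_level_points in national:
    totals[gcse_band][0] += a_level_points
    totals[gcse_band][1] += 1
expected = {band: pts / n for band, (pts, n) in totals.items()}

# One school's cohort: each student's starting band and actual outcome.
school = [(7.5, 54), (6.0, 42), (6.0, 38), (4.5, 30)]

# Value added = mean of (actual - expected), given each starting point.
gaps = [actual - expected[band] for band, actual in school]
value_added = sum(gaps) / len(gaps)
print(f"Value added: {value_added:+.1f} points per student")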

But this assumes that the GCSE profile constitutes a level playing field between schools. A school that successfully squeezes every last drop of value out of the GCSE exams for its students is, in a sense, creating a problem for itself, because “over-achievement” at GCSE reduces the headroom available for further value added between GCSE and A level.

It seems reasonable to use exam results as a measure of an individual student’s attainment. But we need to be careful when we scale up and use any of these metrics to judge schools or systems rather than students. Beware the abuse of aggregation.

Kevin Stannard is director of innovation and learning at the Girls’ Day School Trust. He tweets @KevinStannard1
