So the week has arrived when all of the hard work and trials that went into the Sats and the accompanying Teacher Assessment fiasco last summer get turned into league tables for a public audience. Quite who that audience is, I’m not sure. I’ve known relatively few people outside of teaching show any interest in the data at all, but I guess there must be someone somewhere who’s gripped by it all. It all seems a bit of a nonsense to me.
Don’t mistake me for one of those anti-Sats, anti-test, anti-academic sorts. I’m quite happy with the principle that tests can give an indication of how a school is performing. I’m just not quite ready to hand over all responsibility for such things to the number crunchers, for the problem with number-crunching is that it works best with lots of numbers.
I always used to struggle with some of the simplistic activities that we put children through when teaching the various averages in key stage 2 – let’s just take a moment to celebrate the demise of the median and mode from the Year 6 national curriculum. Inevitably, to teach the methods of calculation, we’d end up asking banal questions, such as what the average shoe size was in a class.
It’s entirely useless to know the average shoe size of any population. In practice, people need the right size shoes for their feet. Knowing that a population has an average shoe size of 6 (median), 6 (mode) and 6.13 (mean) doesn’t help Clarks decide how many pairs of size-13 wellingtons to stock each year.
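The point is easy to demonstrate with a made-up class list (the sizes below are my own illustrative numbers, not real data): all three averages come out around 6, and none of them reveals the one pair of size-13 feet the shop actually has to cater for.

```python
from statistics import mean, median, mode
from collections import Counter

# Hypothetical shoe sizes for a class of 15 pupils (illustrative only)
sizes = [3, 4, 5, 5, 5, 6, 6, 6, 6, 6, 6, 7, 7, 7, 13]

print(f"mean:   {mean(sizes):.2f}")  # prints 6.13
print(f"median: {median(sizes)}")    # prints 6
print(f"mode:   {mode(sizes)}")      # prints 6

# A shoe shop stocks by the full distribution, not the average -
# the size 13 is invisible in all three summary figures above.
print(Counter(sizes))
```

All three summaries agree on "about a 6", yet the stocking decision depends entirely on the tails of the distribution that the averages throw away.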
With smaller populations, it makes even less sense. No family is interested in the average height of its members. We buy clothes for the individual.
Yet we persist in this game of attempting to compare schools based on tiny metrics. One school in town had 84 per cent of children achieve the demanded standard in reading. One in the next village had only 75 per cent. So what? Those figures don’t reveal that the village school had four pupils, one of whom arrived after six months with no English. Whom is such data supposed to help?
The same nonsense occurs with data for vulnerable groups. Perhaps there were 90 children in the cohort and we can begin to make use of the data at that level, but if only six were eligible for pupil premium, does it really make sense to try to compare them to the national cohort of hundreds of thousands?
Yet six is the magic number, according to the Department for Education. If you have only five pupils in any group, the results are hidden away out of view. Reach the magic sixth child and suddenly every idiosyncrasy is expected to be ironed out. Suddenly, one child means the difference between being above and below the average. Five out of six children hitting the golden mark: your local authority will send everyone your way to find out how it’s done. Only manage four over the threshold and it’s you who gets packed off to learn from someone else.
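The arithmetic behind that one-child swing is worth spelling out. With a six-pupil group, each child is worth nearly 17 percentage points, so a single pupil can flip a school from comfortably above a benchmark to well below it (the 75 per cent national figure here is my own assumed number, purely for illustration):

```python
# With six pupils, one child moves the headline figure by 1/6 = ~16.7 points.
cohort = 6
national_figure = 75.0  # hypothetical national benchmark, for illustration

for passed in (5, 4):
    pct = 100 * passed / cohort
    verdict = "above" if pct > national_figure else "below"
    print(f"{passed}/{cohort} = {pct:.1f}% -> {verdict} the benchmark")

# prints:
# 5/6 = 83.3% -> above the benchmark
# 4/6 = 66.7% -> below the benchmark
```

The same cohort, differing by one child's result, lands eight points clear of the benchmark one year and eight points adrift the next; nothing about the school has changed.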
And to top it all, we set future targets based on this charade. This year, your 10 pupil-premium children didn’t do as well as the other 500,000 in the country – what will you do next year to correct the error of your ways? No matter if the following year you’ve got only three eligible children, all of whom are high fliers, an action plan must be produced all the same.
My advice? Save yourself the bother and look instead at the other end of the data journey: what you will do to make this year’s pupils achieve as well as they can before their successes get turned into numbers for the crunchers.
Michael Tidd is headteacher at Medmerry Primary School in West Sussex. He tweets @MichaelT1979