I often wonder how newspaper readers (or, at any rate, those assiduous enough to read more than one newspaper) manage to make sense of the world. On Thursday last week, the Independent headlined: "Summer reading schools failed to raise standards." On average, it reported, the summer literacy schemes had made no difference to achievement, though they had improved pupil attitudes.
The following morning, the headline in the Daily Mail was: "Traditional teaching spells success in English." According to this report, the children had made dramatic improvements: indeed, a third had boosted their reading ability by more than a year in just three weeks.
Since "many" teachers on the literacy scheme had used traditional methods, this was yet another blow against trendy methods. Indeed, John Marks, the right-wing educationist, said that teachers sticking to the trendy methods were guilty of "a form of child abuse". I wasn't quite sure what this implied - perhaps that all copies of the Plowden Report and works by Professor Ted Wragg should be seized under the child pornography laws.
But I digress. Why the two completely differing accounts? The answer, of course, is that there were two studies. One, by the National Foundation for Educational Research, tested children towards the end of the summer term, and again in September. This found no improvement. The other study, by Education Extra, which was running the summer schools, tested the children at the beginning of the three-week courses, and again at the end. This found the dramatic improvements.
We may reach several conclusions. First, as the Hawthorne Effect states, all experiments work. Second, as Hawthorne also states, all experiments eventually fail. Third, the results of an observation depend on who is doing the observing. We may call this the de Gruchy Effect because, just as people running summer schools are bound to find dramatic improvements, so any survey by a teachers' union will always show teachers worse off, on any measure, than they were five years ago. Fourth, anything involving figures is very tricky, as Disraeli observed with his famous "lies, damned lies and statistics".
Which brings me to the league tables of exam results, also published last week. Here is a set of figures that becomes more tricky each year. We can now see GCSE results not just for last summer, but for the three previous summers. No doubt some parents will study the list of "most improved" schools and move house so that their children can enrol.
More fool them, I say. All figures are suspect, and official ones most of all. Someone once discovered that a British balance-of-payments crisis, which had persuaded a Chancellor to introduce a severe economic squeeze, was not a crisis at all - the Treasury had forgotten to count a sizeable proportion of our exports. Likewise, in the United States, some economists reckon that the inflation rate has been over-estimated for 25 years, causing the government to overpay pensioners and other welfare recipients by billions of dollars. If figures like these cannot be trusted, why should anybody heed the school exam tables?
I am not impressed by demands for value-added measures of performance. The Observer had a bash last Sunday, assessing exam results against such factors as numbers of children qualifying for free school meals. This had the satisfyingly perverse effect of placing schools from the much-maligned borough of Hackney in second and fourth positions. But I would have had more confidence in the exercise if the Observer had given me a clue about how it calculated the value-added scores.
An alternative form of value-added would take the attainment of children on entry to school and then measure their progress. Its advocates argue that this would make the effects of home background irrelevant. But don't poverty and deprivation hinder progress, as well as attainment on entry to school? And how are the adapted league tables to take account of children who switch schools?
No, I prefer the league tables we have. They are described as "crude" but that is precisely the point. At least, everyone can see that they are absurdly unfair; the alternatives would be unfair in different ways but, because the methodology would be more sophisticated, they would acquire a spurious authority. I suspect that private and grammar schools and comprehensives in middle-class areas will still find ways of coming top. Even if they don't, will it make any difference? Will City brokers, BBC executives and New Labour advisers be sending their children to the two Hackney schools that starred in the Observer's league table? I shall believe it when I see it.
For all their faults, league tables have performed an important function. They have focused schools' attention on the centrality of learning. We have heard just a little less of all that waffle about schools as social agencies, trying to repair broken communities and bring peace and love to high-rise estates.
People have stopped trying to construct a working-class curriculum of allotments, ferrets and football. Demands for road safety studies, anti-racist studies, entrepreneurial studies, happy family studies, death studies, and all the other ingenious ideas for keeping teachers occupied 365 days a year, are more easily resisted. Schools now know that press, public and parents will judge them on a set of clearly defined results. The judgments may be grossly unfair, but schools at least understand where their priorities lie.
We should leave it at that. I know that schools can boost their positions in the league tables by covert selection of pupils. This is an argument for ending such selection, and changing the admissions system, not for wasting everyone's time with more form-filling, data collection and decimal points. If children from poor homes are doing badly at school, let us give them the smallest classes and the very best teachers. Let us give schools incentives, or even compel them, to recruit the deprived and the under-achieving. But let us not, for Heaven's sake, devise ways of concealing the gross inequalities in our society.
The idea that, if we calibrate our instruments more finely, we can somehow achieve perfect measurement of quality is a modern illusion, created by computers and statisticians who make a living out of it. Auden put it well:

Out of the air a voice without a face
Proved by statistics that some cause was just
In tones as dry and level as the place.

Better to publish figures that everyone knows tell only part of the story than to invent new ones that purport to tell the whole.