Parents have, of course, every right to the information in these tables, and more. But just how useful the present annual exercise is remains questionable. The tables' failure to compare like with like would be easier to excuse if they were demonstrably effective. "No hiding place for failure" is the flavour-of-the-month phrase in Government circles. Yet the current raw-score league tables continue to camouflage underachievement among mediocre schools with favoured intakes.
The tables have acted as a spur to increased efforts in some schools. This effect is clearly not general, however, since the percentage of pupils obtaining five GCSE A* to C grades has risen by less than two percentage points in four years. The proportion achieving at least one A* to G remains the same as in 1994 - and there is little sign that the worst-performing schools have closed the gap on the best since 1992.
But the tables may have had more impact on teachers than on the consumers they were originally meant for. Many parents have difficulty understanding them, or have other priorities when selecting a school.
Whether the league tables - or any of the recent reforms - have had much to do with such improvements as there have been is uncertain. They could equally be the result of rising expectations among an increasingly middle-class, white-collar population. Nobody seems to have worked out what sort of rise in exam success could be expected anyway as a result of demographic change. So attributing improvement to any particular cause - league tables, Office for Standards in Education inspections, easier exams, grant-maintained status or sunspot activity - can be no more than speculation.
Nor are the performance tables as they stand much use to those whose job it is to hold schools and teachers to account. Governors, heads, local authorities - even inspectors - have had no systematic way of judging whether performance is above or below that which could reasonably be expected.
This may, however, be about to change. Schools are promised some sort of "value added" information from the Qualifications and Curriculum Authority in relation to all four key stages before the end of the year, and a pilot scheme in 1998 will report exam results in a form which takes account of pupils' prior attainment. This will inevitably demand much work in matching exam results against national curriculum test data. It will also add considerably to the complexity of the published outcomes. Some statisticians are warning that once intakes are allowed for, the achievements of most schools will become indistinguishable from one another's.
In addition, schools are soon to be provided with a set of "benchmarks" to enable them to evaluate their own 1997 results. Next year, they will be asked to use 1998 benchmarks to set compulsory performance "targets" for 2000. These will take account of children for whom English is an additional language and the percentage taking free school meals.
Meanwhile, as though in a parallel universe, OFSTED is planning to send every school early next year an annual report on its performance, showing how inspectors rate the school in comparison with others. Whether this comparison will draw on the same benchmarks as the QCA's remains to be seen.
What seems beyond doubt, however, is that performance reporting is here to stay. And if the raw tables are to continue - amid this blizzard of statistics - it will not be for their intrinsic contribution to information or accountability. Their key purpose, and that of the media circus that accompanies them, is to underscore the point that outcomes - demonstrable pupil achievements - are what matter. Good intentions are no longer enough.