In addition to publishing 11-year-olds' test scores, the Government has included a value-added measure reflecting the distance that children have travelled since the age of seven. It is a positive step, but the powerful group of educational bodies that published a thoughtful critique of testing and tables this week were right to note the deleterious effects of using key stage 2 results as the principal benchmark of school performance. The consequences are well-known but worth reiterating. In fact they should be chanted: teaching to tests, narrowing of curriculum, teacher and pupil stress, cheating (by teachers as well as pupils), and stigmatisation of lower-performing schools.
It is difficult to see how value-added tables will cure these ills, especially if they bolster the fiction that a school's performance can be assigned a specific mark. As statisticians such as Harvey Goldstein of London's Institute of Education have argued, that is unrealistic - no matter how much politicians would like it to be true.
Statisticians insist that, because testing is not an exact science, the Government should acknowledge that schools' "real" scores could be several per cent higher or lower than the figures published. They also warn that parents may be misled by a single value-added tally: some schools do more for high-achievers than for low-achievers, and vice versa. This may seem unduly purist, but caution is necessary - particularly as the new value-added calculations include only prior attainment and ignore class size and social background.
There is one other important issue to bear in mind. Two years ago, Professor Dylan Wiliam, a highly respected assessment expert, estimated that at least 30 per cent of 11-year-olds were wrongly graded. Thankfully for the Government, Professor Wiliam has since relocated to lusher pastures in the US. But he has left us with an as-yet-unanswered question: is the entire performance tables edifice built on sand?