It seems that the strongest argument in favour of the tests is that they allow primary schools to measure their effectiveness on the famous level playing-field of value-added. The Department for Education and Skills website gives very detailed calculations for setting the average point scores at key stage 1 against the results four years later at key stage 2.
With the inclusion of a performance figure in the league tables we thought the battle for value-added had been won.
How wrong can you be? A primary school near me recently had a local authority inspection where the effectiveness of the school was the main issue.
The head, anxious to know how her school would stand up in the analyses now available, went painstakingly through the last two years of leavers from her school, using the DfES methods of measuring her value added.
She was pleased to find that, though her school was operating in what are called challenging circumstances, her value-added for the leavers of 2001 and 2002 was in both cases slightly ahead of the national average. With this encouragement she prepared with confidence for the inspection.
At the feedback the inspector leading the team acknowledged the dedication and good teaching in the school and the strength of the senior management team. However, the school's raw test performance was below the national average. The head produced her analyses to explain this, but the inspector refused to look at them, dismissing them as "excuses for poor performance".
One is left wondering why such tools of analysis are made available if inspectors can so lightly dismiss them. Has the whole debate on value-added been meaningless, or is it that inspectors themselves do not understand the concept?
If the DfES cannot communicate its policies to LEA inspectors, then surely even the one remaining argument in favour of key stage 1 tests can no longer convince anyone.
199 Nutfield Road