I was disappointed to read the front-page report on the Manchester study claiming that the national tests for 11-year-olds are unreliable in reporting national standards (TES, October 3). Such claims do a serious disservice to the many schools and teachers who are making genuine improvements in their pupils' performance.
Great care is taken to keep the standards on the tests consistent from year to year. They are pre-tested on more than 200 pupils who have just taken the previous year's tests, so that we can establish the equivalence between a level on one year's tests and the same level on the next. This method of calibrating standards from year to year is extremely robust, and means that we can now use the national tests to measure trends in performance.
So why does the Manchester study apparently contradict this? First, the test used in that study was written in the mid-1980s, several years before the national curriculum was introduced, and it no longer accurately reflects what is taught in schools.
Second, the sample size was so small - fewer than 200 pupils in each cohort, taken from only five schools in one local area - that it can hardly be described as representative of the national picture. Third, the most recent figures from Manchester are from cohorts of pupils tested in 1996, and are therefore a year out of date.
The national test figures therefore probably tell a much truer story about current standards.
They show that performance in mathematics is now starting to improve, after many years in the doldrums.
For that, the credit must go to the hard-working schools and teachers who are serious about analysing their results and targeting improvement. Please let us not belittle them.
Head of curriculum and assessment
Qualifications and Curriculum Authority
Newcombe House, 45 Notting Hill Gate, London W11