Can things only get better because tests are easier?
Don't be surprised if sometime in the next month you are passed by a Labour "battle bus" proclaiming at megaphone volume the improvements in the literacy and numeracy of 11-year-olds achieved by this Government.
Labour will rely upon the rise in key stage 2 scores, more than any other statistic, to convince voters that "education, education, education" was more than just a slogan.
And it is easy to see why. When David Blunkett announced his literacy (and later numeracy) targets for primary schools - complete with a pledge to resign if they were not met - there was no shortage of commentators willing to tell him that he had bitten off more than he could chew.
But despite those predictions, the number of pupils reaching the "expected" level 4 has risen dramatically. In 1995, just 48 per cent of pupils reached the expected level in English. This rose to more than 60 per cent in 1997, and by last year three-quarters of pupils reached level 4 - well on track to meet the target of 80 per cent by 2002. Similar improvements have been seen in maths.
But not everyone in the education world is impressed. Critics point out that improvements in reading scores have disguised the fact that attainment in writing has remained stubbornly low.
Last month The TES reported on research from Homerton College, Cambridge, which suggested that reading tests have become easier. There has apparently been an increase in the number of questions that require pupils to retrieve information from a text, and a corresponding decrease in the need for pupils to use "higher order skills" such as deduction and inference.
So what is the truth about these tests? Have we really seen a revolution in the achievement of our primary pupils or have easier tests artificially boosted their results?
Pam Sammons of the Institute of Education in London, who has conducted a project on the tests in Southwark, believes that changes in teaching methods and the Government's target-setting have brought about higher standards. "Improvements are very much down to the literacy strategy," she says. "I know when they develop these tests that they spend a lot of time looking at standards year on year. Changes in tests may have had a small impact but they certainly wouldn't explain the improvements we've seen.
"We've heard positive remarks from teachers in Year 7 about the improvement in the knowledge of pupils starting secondary school. Much of this has been down to a rise in the attainment of disadvantaged pupils," she says.
But evidence from other studies casts doubt on this view. Peter Tymms and Carol Fitz-Gibbon of Durham University have compared the dramatic rise in key stage 2 reading test scores with the trend in the achievement of 11-year-olds measured by researchers between 1975 and 2000.
As the graph shows, KS2 reading test scores show a bigger change in four years than any of the other data sources have tracked over 24 years. For example, the Performance Indicators in Primary Schools study published by Peter Tymms in 1999 found that reading standards were remaining constant at a time when KS2 results were rising.
"The changes seen between 1995 and 1999 are so dramatic, and so out of step with the other data, as to raise questions about their being true representations of changes in standards," Tymms and Fitz-Gibbon say.
But they reject the "cynical" view that the Qualifications and Curriculum Authority is deliberately making tests easier or adjusting the cut-off marks to ensure that targets are met. Instead, they suggest that the tests may not be a good way of accurately measuring changes in standards over time.
"If teachers were concerned about the results and were teaching exam techniques, then what we might see is a capacity to pass tests rather than an indication of pupils being better at English, science or mathematics. This point highlights the problems associated with the accountability-assessment system currently in existence in England and Wales," they say.
Tymms and Fitz-Gibbon also suggest that the difficulties involved in ensuring that each year's test requires the same standards as its predecessors, and the fact that the QCA only checks tests against the previous year's, could explain the disparities.
A few words of caution should be added. Many of the research projects upon which their conclusions are based are relatively small-scale. Tymms and Fitz-Gibbon themselves describe their conclusions as "tentative", and call for an independent representative sample of pupils to be assessed on the same test each year to allow standards to be more accurately monitored over time.
But although less damning than the Cambridge University research, their work raises serious questions about the accuracy of Labour's claims. And because the concerns do not focus specifically on the nature of English tests they could also apply to improvements in maths and science.
At the very least there are doubts about the extent of the improvement in literacy among our primary pupils. But don't expect to hear that from a Labour battle bus.
'Standards, achievement and educational performance: a cause for celebration?' by Peter Tymms and Carol Fitz-Gibbon is included in 'Education, Reform and the State', edited by Robert Phillips and John Furlong, Routledge Falmer, £25