Tests can only get better

Platform: Opinion
key stage results, Nicholas Tate defends the standards of the curriculum tests while advocating a fundamental review of their purpose.

ONE DIFFERENCE between England and France, where I have just been on holiday, is that the start of the new school year in England is marked by the publication of the results of national curriculum tests taken before the summer break.
On this side of the Channel we have already had our 1999 A-level and GCSE results, together with the hype we have come to associate with this phase of the educational calendar. Next week sees the culmination of this phase with the publication of the 1999 national test results for seven, 11 and 14-year-olds.
Media interest in the key stage results will be equally keen, not least because of their implications for the 2002 targets. Following limited research, The TES has already suggested that we may see significant improvements in some subjects at some key stages. Next week's Government announcement will reveal to what extent these predictions are confirmed.
A striking feature of the media coverage of A-level and GCSE results was the belated recognition that the main explanation for improved results might well be that students were working harder and teachers teaching better. There was some intelligent discussion of the factors likely to account for this: the impact of target-setting, performance tables and inspections; increased openness about the examining process; a new commitment to continuous improvement; shifts in social structure and the consequences of these for expectations. There was less simplistic talk of "dumbing down" than I can remember from any previous year.

Let's hope we have a similarly intelligent discussion about next week's results. Many of the factors that explain changes at 16 and 18 are at work earlier on.
At key stage 2 there is the additional factor of an unprecedented Government initiative to tackle low attainment in literacy and numeracy. All primary schools have been following the literacy hour and 70 per cent had already implemented the numeracy strategy before its formal introduction this month. This must have a positive impact.
Whatever the results - and there will be obstacles to progress as well as reasons for improvement to be identified - the merit of criteria-based national tests is that they tell us something about what is happening in the real world, difficult though this sometimes may be to interpret. They are quite unlike the old norm-referenced tests, which some critics amazingly still hanker after and which tell us precisely nothing.
One positive feature of this year's coverage of A-level and GCSE results was broad acceptance of the evidence that standards are being maintained. There is similar evidence for key stages 2 and 3, where, unlike GCSE and A-level, there is only one test agency and only one test in each subject and where level-setting procedures are some of the most sophisticated in the world.
These procedures were recently vindicated in Jim Rose's inquiry into last year's key stage 2 tests. This concluded that marks to achieve each level were set with great care, that there was no "fiddling" with the difficulty of the tests to improve the results, and that "our system of national tests is well in advance of that of many, if not all, of our international counterparts".
The Qualifications and Curriculum Authority has responded to the inquiry's recommendations. While maintaining consistency of testing procedures between now and 2002, we have been keen to improve arrangements wherever we can. We will be setting up a new group, including independent assessment experts, to review all technical aspects. We will make further improvements to ensure an even higher quality of marking. We will be involving teachers even more, in scrutinising draft tests and commenting on children's responses in the pre-tests, while keeping a close eye on security. We will be making available more teacher assessment data to help in level-setting.
Some of these measures will first take effect in relation to the 2001 tests. The 2000 tests are now largely completed, having been developed after three trials and pre-tests involving 17,000 pupils from different schools. Altogether, 330 teachers and other experts have commented on the tests. I am confident we have a high-quality product that will serve its objectives well.
One of the recommendations of the inquiry was to review the purposes of the tests as they might develop after 2000. Our predecessor body, the School Curriculum and Assessment Authority, took a long look at this issue, identified some options, but recommended stability in the short term. Both this and the previous government have taken a similar view.
Longer term, given the huge level of investment, a more fundamental review should not be ducked. In response to Jim Rose's recommendations, we have put suggestions to Education Secretary David Blunkett about how such a review might be conducted.
Our national curriculum test system is ambitious and sophisticated. We must ensure that it inspires confidence and that the tests are fit for the purpose for which they are intended.
Dr Nicholas Tate is chief executive of the Qualifications and Curriculum Authority.