So, league tables of standard attainment test results at key stage 2 are with us at last. Despite being widely rehearsed over recent years, the arguments against them have failed to prevail. There would appear to be an unshakeable conviction on the part of this Government that making comparative national assessment results available - regardless of accuracy - will be a good discipline for schools and teachers and, at the same time, prove popular with voters hungry for information that will apparently allow them to compare standards.
Such judgments are the proper preserve of politicians, but they need to be informed by an understanding of the likely consequences for pupils, teachers and education.
For eight years, our Primary Assessment, Curriculum and Experience project (funded by the Economic and Social Research Council) has been monitoring the impact of changes following the 1988 Education Act. Our research has documented the different stages of its implementation: the early phase of anger, confusion and panic on the part of teachers; the subsequent gradual acceptance of the national curriculum but the persistent problem of overload; the crescendo of protests about the accuracy and appropriateness of national assessment; and finally, the calmer waters post-Dearing.
In the past few years, primary school teachers have got used to heavy workloads. They have learned to work more collaboratively. They have re-focused their curriculum priorities, changed their approach, come to grips with the need for differentiation, and developed an increasing level of professionalism in assessment.
Even the violent antipathy that many felt towards SATs just a year or two ago has been replaced by a broad acceptance of the inevitability and even the desirability of key stage testing. The publication of primary league tables may well open up these recently healed wounds. The exposure to public scrutiny of school results that are not a valid representation of a school's quality is likely to reignite the whole issue of the purpose of national assessment. Who is it for? How useful is the information, and what price are individual schools, teachers and children paying for it?
Our research provides some general answers to these questions. More particularly, it underlines the undesirability of publishing league tables of results - first, because the data is insufficiently reliable; second, because the results serve no real purpose and thus the expense of producing them cannot easily be justified; and third, because of the undesirable effects of so doing on the curriculum and teacher morale.
During the administration of SATs last summer, the PACE team observed testing in action, talked to teachers and pupils about their experiences and sent out a questionnaire. In the light of these findings, it is impossible not to be worried about the overall accuracy of the KS2 assessment data. On individual pupil performance, teachers reported a range of factors that they felt could prevent some pupils doing their best: nerves, poor reading skills, a poor ability to memorise and so on. While these are well-known problems with any examination, they are likely to be more acute for young children unfamiliar with the demands of formal testing.
There must also be doubts about the comparability of the results at school level. Our study found, as did similar studies, that despite the explicit instructions provided by the School Curriculum and Assessment Authority, there were considerable variations in the procedures used before, during and after the tests. We concluded that in schools where there is little or no tradition of formal, external examinations, teachers find it almost impossible to step outside their role of supporter and facilitator to adopt that purely of an invigilator.
Given the scale and complexity of national assessment, it would be surprising if all the potential sources of variability had been removed at this stage. That they have not is a cause for concern, especially since the results are to be the subject of the explicit public comparison that the league tables invite.
But how useful are the results for other purposes? The potential uses include:
* the facilitation of pupil transfer;
* assisting the school in reviewing its teaching and assessment practices;
* accountability to governors, parents and the community.
The results come too late to be useful for pupil transfer, and they are not designed for the fine discrimination that would be needed for any diagnostic purpose. It is equally doubtful whether primary schools themselves can make much use of the data. Our studies provide little evidence of test scripts or of the results as a whole being used to inform teaching and learning practice or policy in individual schools.
This leaves accountability. How concerned are parents, for example, about the results? In 1996 the teachers PACE spoke to reported little evidence of strong feelings among parents about national assessment results. Although naturally anxious that their children should do well, many parents did not really understand the results, the distinction between teacher assessment and SAT results and what the levels meant.
This week's league tables are likely to alter this situation rapidly. Their publication will make parents much more concerned to compare schools' results, though probably still with little understanding of the limitations of league tables. It will force schools to concentrate even more on "getting the scores up" by teaching to the test and explicit drilling in test-taking skills. This is likely to narrow the existing broadly-based curriculum and to introduce an emphasis on memorising specific facts and particular language and maths operations. All this just at the time when our evidence showed that, after the years of turmoil, national assessment was at last showing signs of becoming embedded as an acceptable and constructive feature of primary schooling.
However, there were also clear signs that this growing manageability disguised the fact that, in its existing form, national assessment at KS2 serves little purpose for parents, teachers, or receiving secondary schools. The indications are that parents are interested in having much more detailed information about their child. For teachers, the information, although of some interest, is generated too late to be of use in informing their practice. For secondary schools, the results come too late to inform class allocations and anyway are too broad to be usefully diagnostic.
For Year 6 pupils, assessment is now an established rite of passage. As with the 11-plus, many will accept the verdict on their performance with all that this implies for their developing self-image as they enter the hurly-burly of secondary school. Yet, as a device for comparing schools, KS2 assessments must be seen as significantly flawed. The information they contain is weak in terms of reliability and validity and insufficiently contextualised to have much meaning.
Above all, there is the waste of scarce resources and the missed opportunity to establish a national assessment system that has the potential to enhance learning. It is too early to say what the effect of KS2 league tables will be. Certainly it is hard to find any grounds on which to disagree with the teacher quoted in our research - "It's no use to anyone" - except, of course, as a potential vote-catcher. But if "It's no use to anyone", it will almost certainly be damaging to many.
Professor Patricia Broadfoot is head of the School of Education at Bristol University