The Scottish Office has decided to end the national monitoring of pupil achievements as we know it. No longer will there be teams of independent researchers, for English language, for mathematics and for science, contracted specifically to conduct surveys of what pupils know and can do at P4, P7 and S2. Instead, it has been decided to centralise operations in the Scottish Office using one "national co-ordinator" for all three subjects (and perhaps for others should policy change) with subject expertise to be bought in, contracted for limited timescales. The recent advertisement for two mathematics experts, for example, indicates 114 days to be the requirement for the preparation for and conduct of the fifth mathematics survey in 1997.
As one of those responsible for the science surveys (we are involved in the data analysis of the results from the fourth science survey carried out earlier this year) I should like to express my regret that the Scottish Office has not taken the advice tendered by all three research teams. At the programme committee for the AAP in June, the lead researchers from Edinburgh University, Strathclyde University, and the Scottish Council for Research in Education objected to proposals from the Research and Intelligence Unit. Despite the consensus in our reactions, there has been no change of plan by the Scottish Office; the Department has gone ahead with what it outlined in June. In my judgment, the signs for the new arrangement do not look good. A significant shift of policy has taken place, and one has to wonder where things are going. This has happened despite the experience and integrity, the efficiency and productivity of the researchers in all of the recent surveys - more than a decade of good practice, recognised by Scottish teachers as the only independent source of reliably-obtained indicators of what pupils do achieve, and internationally admired. No other country has used regular, light sampling national surveys conducted independently of national testing.
In recent years, the curriculum fit of assessment tasks in the light of 5-14 targets has become vital and demanding. The research teams have done much to interpret and amplify the national targets in ways which help teachers. All of the assessment instruments have survived careful scrutiny by advisory committee members where experienced primary and secondary teachers and others have given their backing to material carefully produced, piloted and made the subject of subsequent thorough analysis with data generated from the performance of children in carefully chosen national samples (thanks to the involvement of the Central Support Unit based at SCRE). Even the views of an independent, non-Scottish, consultant are on record as to the quality of the operations.
A question of money? Each of the research contracts has fallen in the Pounds 80K to Pounds 150K range for the 2½ years of work (and most of the money pays for researchers' time, though there are very significant costs associated with the printing of materials, paying field officers, script marking, and so forth). The rolling programme results in one subject being surveyed each calendar year, so annual costs to SOEID must be in the region of Pounds 100,000. Judging by international perceptions of what is achieved under the present arrangement, this seems good value for money.
My colleague, Rae Stark, has spoken at international conferences in the United States and I have addressed Australian and New Zealand colleagues concerning our work on the Scottish science surveys. They have been impressed by our inclusion of practical forms of assessment. International feedback to us has always been positive: admiration, to the point of envy, of the Scottish practice of maintaining both national monitoring and national testing.
The two kinds of assessment serve different purposes and their conduct in quite different ways is educationally justified. The AAP's data, drawn from the performance of carefully-obtained pupil samples on fairly substantial numbers of assessment tasks, provide objective measures of pupil standards without compromising teachers or imposing crushing workloads on particular schools (a multi-matrix design ensures that no single pupil has to undergo an unreasonable amount of testing). Furthermore, the relative stability of the survey teams has provided continuity which is efficient and economical.
Could it be that the results of the surveys are troubling the Scottish Office?
This is an area full of interest, contention and political debate, especially now that AAP is, in effect, producing a measure of the effectiveness of 5-14 implementation. The assessment frameworks of AAP have, quite reasonably, been brought into line with the guidelines provided for each of the subjects in question. Bear in mind also that English language and mathematics both incur national testing (5-14) as well as national monitoring (AAP); science only has AAP, so far. The comparisons over time have been revealing; disappointment has been felt by all at the declining standards for some aspects of English language and mathematics, especially at S2; pleasure has been shared concerning the apparent stability in the science figures (although here too, S2 performances always disappoint in comparison with those at P7).
History has shown that the researchers have done their very best to produce sound data and justified conclusions. Clearly, researchers and SOEID do not see eye to eye about the way to run future surveys. For example, at the June meeting when the RIU outlined its calculations to buy in subject expertise on the top-sliced model, we were surprised at the underestimates of the time and effort required to carry out AAP tasks, never mind the simplistic assumptions about what expertise can be bought in this way while still getting everything done. Allowing for the time a future "national co-ordinator" will be able to spend on any one survey at a time, conservative estimates in the case of mathematics put the 114 days of subject expertise out by a factor of five.
The more important point is the bureaucratic centralisation which is about to take place. A "National Co-ordinator" working within the Scottish Office just won't be allowed to own the programme in a way that the individual projects have been owned to date. Everyone (except the SOEID, it would seem) knows about the ownership of skilful work and what drives quality operations.
A national administrator might have been one thing, but the appointment of a national co-ordinator and the concomitant disbandment of three research teams, recognised for their integrity and independence, is quite another. But I suspect there is more (meaning worse) to come. Isn't this just setting the scene for a further shift in policy concerning testing and what the system is claimed to produce?
I fear the very worst: that the loss of AAP in its present form is deliberate, clearing the pitch for something different. If SOEID is making changes to yield an outwardly-perceived reduction in the cost of national monitoring, then I suspect that, on full accounting, there will at the end of the day be little saving, if any. However, as I have tried to argue, quality operations are now in jeopardy. Maybe this is simply the start of a rundown, making it more likely that national testing will replace AAP (as per south of the border).
Goodbye AAP; goodbye good practice. The international embarrassment of this will be hard to take.
Tom Bryce is a professor in the Faculty of Education, University of Strathclyde.