Some especially difficult intellectual challenges come to be regarded as "swords in the stone". Pierre de Fermat's last theorem, which the mathematician Andrew Wiles appears finally to have proved, was one. Working out the perfect statistical method for assessing how much schools contribute to a pupil's academic success is another.
The Inner London Education Authority and a few other LEAs began to tug at the value-added sword in the late 1970s. More researchers took up the challenge after the Thatcher government announced - as long ago as 1981 - that it intended to publish exam results. The following year, the then Social Science Research Council presented Sheffield University with a four-year research contract that generated some early versions of multi-level analysis of school performance (see John Gray's article below), and in 1983 the first value-added service for schools, ALIS, which compares A-level results with GCSE scores, was established at Newcastle University.
Throughout the remainder of the 1980s, statisticians developed software that could provide ever more accurate value-added analyses. But it was only in November 1992, with the publication of the first school performance tables, that their work took on new urgency. The teaching profession has, of course, repeatedly complained that raw results are misleading, and Sir Ron Dearing, in his interim report last year, warned that they could encourage unwarranted complacency in schools with high-ability intakes, and despair in those catering for less able pupils.
Now Gillian Shephard, the Education Secretary, has in effect conceded that those misgivings are justified and has commissioned research into three simple models for value-added analyses.
None the less, the research and analysis that have been going on in as many as 30 LEAs in England and Wales - either on their own or in tandem with the universities of Newcastle, London, Sheffield, Cambridge and Leeds, and the National Foundation for Educational Research - are certain to continue because the Government's "lead" has been too late in coming. Similar work is under way in Scotland, where a sophisticated statistical model has enabled schools to compare the performance of subject departments since the 1970s.
Some research projects have compared GCSE scores with attainment at 11, but where no Year 6 data is available, social background variables have been used to interpret exam results. Other researchers have opted for a mixture of the two strategies.
The wide variety of approaches, and Professor Carol Fitz-Gibbon's characteristically combative contribution to this Research Focus (see opposite page), suggest that the research community is deeply divided over methodology. But in reality there is a convergence of views on what the important indicators of exam performance are, and a belief that the value-added sword is at least loosening. As Professor John Gray, director of research at Homerton College, Cambridge, said: "It may seem hard to believe that there is this convergence, when one authority is looking at prior attainment and another is counting the number of free school meals. But sometimes researchers have to say 'This is what we have in our cupboard. What sense can we make of it?'" Prior attainment is seen as the most important predictor of performance at 16 and 18. It is generally thought to account for perhaps 50 per cent of the differences between schools' GCSE scores.
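The basic logic behind such analyses can be illustrated with a toy calculation. The sketch below is not the method used by any of the projects described here, and the figures and variable names are invented for illustration: fit a straight line of mean GCSE points on mean intake score, treat each school's residual from the line as its "value added", and read the R-squared of the fit as the share of between-school variation explained by intake.

```python
# Illustrative value-added sketch: regress schools' mean GCSE points
# on mean intake (prior-attainment) score, then read each school's
# "value added" as its residual from the fitted line.
# All data below are invented for illustration only.

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

intake = [92, 100, 104, 110, 98, 115]          # mean intake score per school
gcse   = [30.1, 33.5, 36.0, 41.2, 35.8, 42.0]  # mean GCSE points per school

a, b = fit_line(intake, gcse)
predicted = [a + b * x for x in intake]
value_added = [y - p for y, p in zip(gcse, predicted)]

# Share of between-school variation explained by intake (R squared):
my = sum(gcse) / len(gcse)
ss_tot = sum((y - my) ** 2 for y in gcse)
ss_res = sum(r ** 2 for r in value_added)
r_squared = 1 - ss_res / ss_tot

for school, va in enumerate(value_added, 1):
    print(f"School {school}: value added {va:+.1f} points")
print(f"Share of variation explained by intake: {r_squared:.2f}")
```

The multi-level models mentioned above go well beyond this single-line fit, but the intuition is the same: a school is judged against what its intake would predict, not against a raw league-table position.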
But, paradoxically, worthwhile research can still be undertaken without any scores at 11 to refer to. Nottinghamshire, which has been working with Sheffield University, looked only at gender, parental occupation and eligibility for free school meals, and yet mounted one of the most impressive of the local authority projects. It found that these three factors helped to explain just under a third of the differences between schools' GCSE points scores. It also concluded that the wrong choice of secondary school could cost a pupil four or five A-level points (less excitingly, it is also true that most schools achieve roughly what is expected of them, given their pupil intakes).
Nottinghamshire's scrutiny of 9,427 GCSE pupils (95 per cent of the 1993 cohort) revealed that GCSE points scores had increased not because marking had become more lenient but because pupils were being entered for more exams. Its other important - and worrying - findings, which should only be read after a reminder that race, class and gender do not automatically predetermine pupils' performance, were that:
* the children of professional parents were entered for two extra exams on average and achieved more than double the points score of pupils from manual homes;
* girls gained four points more on average - the equivalent of a grade D;
* candidates born in September and October amassed more GCSE points than their younger classmates;
* the average points scores of white, Asian and Afro-Caribbean children were 30.2, 27.9 and 19.8 respectively.
When such findings were taken into consideration, it became clear that the school which had ostensibly done best at A-level had actually been bettered by 34 other secondaries.
Nottinghamshire has not illustrated this with an alternative value-added table, but it could probably be done. It is often said that such tables are too difficult to compile, either because schools now offer a mix of vocational and traditional qualifications or because the margin of error cannot be presented in a way intelligible to parents.
Nevertheless, Camden last year published some rudimentary value-added tables which showed very clearly how children in bands 1, 2 and 3 of the London Reading Test (taken at 11) had performed in their GCSEs. And now Lothian has become the first Scots authority to issue a table charting progress in the year between Standard and Higher grade exams.
Value-added tables are not only complex to construct, however. They can also be almost as misleading as raw results. Oxfordshire found, for example, that one school moved 14 places in the rankings depending on whether the measure of GCSE success was four, five or six A-C passes. This is one reason why virtually all the current value-added projects are not designed to yield tables but to provide confidential information that helps schools to target subjects, pupil groups or even individual pupils who merit special attention.
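The Oxfordshire finding is easy to reproduce in miniature. In this hedged sketch the school names and pass counts are wholly invented: rank the same three schools by the proportion of pupils gaining at least four, five or six A-C passes, and watch the order shuffle as the threshold moves.

```python
# Toy demonstration of how a school's league-table rank can shift with
# the chosen success measure. Figures are invented for illustration:
# each school's pupils are listed by their number of A-C passes.

passes = {
    "School A": [9, 9, 9, 2, 2, 2],  # a few very strong candidates
    "School B": [5, 5, 5, 5, 5, 5],  # uniformly solid, no high fliers
    "School C": [6, 6, 4, 4, 4, 4],
}

def rank_by_threshold(threshold):
    """Rank schools by the share of pupils with >= threshold A-C passes."""
    share = {school: sum(p >= threshold for p in pupils) / len(pupils)
             for school, pupils in passes.items()}
    return sorted(share, key=share.get, reverse=True)

for t in (4, 5, 6):
    print(f"At least {t} A-C passes: {rank_by_threshold(t)}")
```

With these invented figures, School A sits bottom when the measure is four passes but top when it is six - the same instability Oxfordshire observed, produced by nothing more than the choice of cut-off.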
ALIS (see project list) offers schools this service. But it also disseminates general findings through chatty newsletters that make fascinating reading, once the statistical jargon has been mastered: "Look forward to your pupil level residuals towards the end of September."
For example, A-level physics candidates who do not take a maths A-level tend to achieve a grade and a quarter less than expected. Biology results have also been shown to be largely dependent on candidates' understanding of chemistry and maths, and many ALIS schools have adjusted courses accordingly.
Vital though such research is, it will not satisfy politicians - of the right and left - wedded to league tables. Most parents appear to want tables too, no matter how often they are told not to take them at face value. Suspending their publication until a fairer method of rating performance can be devised does not seem to be an option. That is a pity, because it would appear to be the wisest course.