Few would fault David Blunkett's objectives. As part of his strategy for improving Britain's education standards, the Education and Employment Secretary has embarked on two significant reforms: first, the setting of targets for schools and for Britain as a whole; and second, the provision of value-added data, to enable parents, teachers, politicians and administrators to make realistic judgments about the performance of each school.
The ambition is worthy and the need is urgent. The problem, rather, is that these reforms may collide with each other, with the result that neither works as well as it should. The solution is to separate them. Target-attainment by pupils should be measured by quite different tests from value-added by schools.
Instead, the early signs are that collision is more likely than separation. Within two weeks of becoming Education Secretary, Mr Blunkett announced targets for English and maths for 11-year-olds which were to be reached by 2002. Other targets have been set for 19-year-olds, 21-year-olds and lifetime learning (see box, right). Mr Blunkett did not pluck these figures out of the air: in the main he has followed the recommendations made last year by the Government's School Curriculum and Assessment Authority. The measuring rods for the targets to be reached by school pupils are the main exams - key stage tests at 11 and 14, GCSEs, A-levels, GNVQs and NVQs.
In July, the Government also endorsed SCAA's proposals for measuring schools' value-added. Once again, the measuring rods are to be the main national tests. SCAA announced that this value-added data "will contribute to schools' own analyses of their performance and help them set targets for improvement".
Taken together, these reforms mean we are heading for a world in which national tests are required to fulfil four distinct functions: assessing the progress of individual pupils, generating value-added information about schools, measuring the progress of schools towards specific targets, and measuring the progress of Britain's schools and pupils overall towards national targets. This is far too big a load to place on a single sequence of tests. Were education policy being made by trained economists, they would know precisely why. They would cite "Goodhart's Law" and tell SCAA to think again. The law is named after Charles Goodhart, a former chief economist at the Bank of England. He showed that perfectly good economic indicators tended to misbehave once they were adopted as policy targets. The classic, though far from the only, example helps to explain why unemployment more than doubled to three million in the early Eighties.
During the Sixties and Seventies, monetarist economists argued that money supply figures tracked inflation closely. When money supply grew too fast, inflation rose; when money supply grew more slowly, prices were more stable. At the same time - and this, for monetarists, was the clincher - there was no clear link between money supply and output, except in the very short term. The lesson seemed obvious: if the Government and the Bank of England controlled the money supply more tightly, then Britain's economy would grow just as fast, but inflation would stay low.
When Margaret Thatcher became Prime Minister in 1979, that is precisely the policy she introduced. And what happened? Goodhart's Law operated with a vengeance. Interest rates were pushed up to terrifying levels. The pound soared to ridiculous heights. Many British manufacturers found they could not compete with their overseas rivals. And the economy dived into its deepest recession since the Thirties. The very act of making money supply the main policy target changed the relationship between money supply and the rest of the economy.
Although Goodhart's Law springs from economics, it applies to life generally. To see how it could trip up education policy, we need look no further than the report that SCAA commissioned on value-added.
The report contains the findings of a two-year research project led by Professor Carol Fitz-Gibbon and Peter Tymms of Durham University. Section six of their report, blandly entitled "Wider issues", delivers a series of stark warnings about the quality and use of exam-based data:
Tunnel vision: Schools might stress achieving targets at the expense of non-curriculum subjects and other educational functions.
Sub-optimisation: Schools might enter more pupils for "easier" subjects, and direct resources to those pupils who could yield the highest value-added scores, rather than provide a balanced service across all subjects and all pupils.
Myopia: Achieving high value-added scores might take precedence over the provision of skills of lasting benefit to pupils in adult life.
Measure fixation: If the key measure for judging secondary schools is the number of pupils gaining five or more A-C grades, resources might be skewed to pupils expected to gain mostly Ds, in the hope of pushing them into the five A-C "pass" category.
Misinterpretation: Value-added is a complex matter that cannot be readily reduced to a single number or a simple set of cause-and-effect conclusions.
Misrepresentation: Schools and exam boards might cut corners in the way exams are conducted and papers marked.
Gaming: A school might seek to ensure that its intake-measure is as low as possible in order to depress the baseline on which value-added is calculated.
Ossification: Rigid systems are likely to produce increasingly distorted data as time goes by.
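The "measure fixation" warning is easy to see in miniature. A headline indicator that counts only pupils clearing a threshold rewards effort spent on pupils just below it and registers nothing for anyone else. The sketch below uses invented grade data purely for illustration; it is not drawn from any real school's figures.

```python
# Illustrative sketch only: invented data showing why a threshold
# indicator ("five or more A-C grades") rewards effort on borderline
# pupils. Each number is one pupil's count of A-C grades.
grades = [7, 5, 4, 4, 2, 1]

def pass_rate(grade_counts, threshold=5):
    """Share of pupils with at least `threshold` A-C grades."""
    at_or_above = sum(1 for g in grade_counts if g >= threshold)
    return at_or_above / len(grade_counts)

print(pass_rate(grades))  # 2 of 6 pupils clear the threshold

# Raising one borderline pupil from four to five good grades lifts
# the headline figure from a third to a half...
print(pass_rate([7, 5, 5, 4, 2, 1]))

# ...while the identical one-grade improvement for a weaker pupil
# leaves the indicator unchanged, although total attainment rises
# by exactly the same amount.
print(pass_rate([7, 5, 4, 4, 3, 1]))
```

The indicator moves only when a pupil crosses the threshold, which is precisely the incentive to skew resources towards expected-D pupils that the report describes.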
The authors make some tough recommendations: that value-added "profiles" of schools should be published, rather than league tables based on single indicators; that value-added data should be adjusted to take account of separate issues, such as truancy and pupil turnover; that "more stringent quality assurance procedures" be introduced into the marking of exams; that a new system should be developed for monitoring possible shifts in standards from year to year; and so on.
These recommendations would go some way towards reducing the dangers in the present system. But some risks would remain; and if the same sets of exams were used not just to compare schools but to set school-by-school targets, as SCAA wants, then the pressures to meet those targets, by fair means or foul, would be intense.
So the SCAA report suggests that it would be dangerous to use exam results to provide both value-added and target data. If ever Goodhart's Law needed to be invoked in order to avert disaster, it is here.
As it happens, Fitz-Gibbon and Tymms have developed their own range of simple tests to assess not only the development of basic skills such as English and maths, but wider issues such as social development and, in some tests, self-esteem. Different tests have been devised for each stage of schooling, from Year 1 through to the sixth form (PIPS, YELLIS and ALIS).
Last year 500,000 pupils in more than 2,500 schools did these tests. Early findings show that this system is capable of generating standardised value-added data that avoid many of the defects of exam-based data.
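The idea behind any value-added measure is to compare each pupil's actual result with the result predicted from an intake (baseline) score, then average the differences across the school. The sketch below is a deliberately minimal illustration with invented numbers and an assumed linear national trend; real systems such as PIPS, YELLIS and ALIS use far richer models.

```python
# Illustrative sketch only: a minimal value-added calculation.
# The slope/intercept of the "national trend" and all pupil scores
# are invented for the example.

def predict_outcome(intake, slope, intercept):
    """Exam score predicted from a pupil's intake (baseline) score."""
    return intercept + slope * intake

def value_added(pupils, slope, intercept):
    """Mean residual: actual score minus predicted score.

    A positive value means the school's pupils progressed faster
    than the assumed national trend would predict.
    """
    residuals = [actual - predict_outcome(intake, slope, intercept)
                 for intake, actual in pupils]
    return sum(residuals) / len(residuals)

# Hypothetical national trend fitted across all schools:
SLOPE, INTERCEPT = 0.9, 10.0

school = [(50, 58), (60, 66), (40, 48)]  # (intake score, exam score)
print(round(value_added(school, SLOPE, INTERCEPT), 2))  # 2.33

# Note the "gaming" warning from the SCAA report: report a lower
# intake score for the same pupil and the apparent value-added rises.
print(value_added([(40, 58)], SLOPE, INTERCEPT))  # beats the (50, 58) pupil
```

Because value-added is a residual against the baseline, anything that depresses the intake measure inflates the result, which is exactly the gaming risk Fitz-Gibbon and Tymms identify.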
In an ideal world, careful thought would have been given to value-added data and targets before league tables appeared and before ministers set targets. Instead, we must start from where we are, with the exam system as it is and the targets that have been announced.
Apart from implementing the detailed reforms in the SCAA report, the most useful thing that David Blunkett could do now is to relieve some of the burden that is building up on exam results, and announce a twin-track approach. If exams are to be the means by which individual children are to be judged, and the yardstick by which national achievement is to be measured, then that should be the limit of their function.
One final reform is long overdue. There is something unhealthy - perhaps uneducated - about drawing up school league tables based on a single indicator. Too many trees have been sacrificed in recent years to produce the paper on which such ludicrous and misleading tables have been printed.
Mr Blunkett's task is not only to drive up standards in our schools, but to ensure that the data by which we judge the progress of pupils, schools and Britain as a whole is beyond reproach.
National Targets for 2000
Foundation learning
* 85 per cent of 19-year-olds to achieve five GCSEs at grade C or above, an Intermediate GNVQ or an NVQ level 2
* 75 per cent of 19-year-olds to achieve level 2 competence in communication, numeracy and IT by age 19; and 35 per cent to achieve level 3 competence in these key skills by age 21
* 60 per cent of 21-year-olds to achieve two GCE A-levels, an advanced GNVQ or an NVQ level 3
* 60 per cent of the workforce to be qualified to NVQ level 3, Advanced GNVQ or two GCE A-level standard
* 30 per cent of the workforce to have a vocational, professional, management or academic qualification at NVQ level 4 or above
* 70 per cent of all organisations employing 200 or more employees, and 35 per cent of those employing 50 or more, to be recognised as Investors in People
These targets, produced under the last government, are under review.
National School Targets for 2002
* 80 per cent of 11-year-olds to achieve level 4 in English
* 75 per cent of 11-year-olds to achieve level 4 in maths
The targets were announced by the Secretary of State, David Blunkett, in May
Advisory National School Targets
* 75 per cent of 11-year-olds to achieve level 4
* 80 per cent of 14-year-olds to achieve level 5 and 50 per cent level 6
* 55 per cent of 16-year-olds to achieve five GCSEs at A-C or equivalent
These targets were set out in advice to the previous government by SCAA and NACETT in September last year
References: The Value-added National Project Final Report, by Carol Fitz-Gibbon, published by SCAA, July 1997. Consultation period ends: October 31. Target Setting and Benchmarking in Schools, Consultation Paper, September 1997. Consultation ends: November 14
SCAA Publications, PO Box 235 , Hayes, Middlesex, UB3 1HF. Tel. 0181 867 3299