Value-added measures of performance are here to stay, judging by the Government's recent pronouncements.
From this autumn all primary and secondary schools will receive value-added analyses of their examination and national curriculum test results. The main purpose is "to assist schools in looking at their pupils' progress against the national picture".
The Government also intends to include value-added measures in published performance tables by 2000 for secondary schools and 2002 for primaries.
It is therefore crucial to get the interpretation and use of such data right. Value-added indicators are an improvement on raw results, which say a lot more about a school's intake than its effectiveness. Value-added analyses attempt to strip away factors associated with performance that are not related to institutional quality. These include pupils' prior attainment, gender, ethnic group, date of birth, level of special need and social disadvantage.
It is not unusual for prior attainment plus the level of disadvantage in a school (as measured by free school meals) to account for as much as 80 per cent of differences between schools.
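In essence, a value-added score is the gap between a pupil's actual result and the result predicted from background factors. A minimal sketch, using invented data, a single background factor (prior attainment) and an ordinary least-squares fit, might look like this:

```python
import random
import statistics

random.seed(1)

# Hypothetical data (scales and coefficients invented for illustration):
# each pupil has a prior-attainment score and a later exam outcome.
prior = [random.gauss(100, 15) for _ in range(200)]
outcome = [0.5 * p + random.gauss(20, 5) for p in prior]

# Ordinary least-squares fit of outcome on prior attainment.
mx, my = statistics.mean(prior), statistics.mean(outcome)
slope = (sum((x - mx) * (y - my) for x, y in zip(prior, outcome))
         / sum((x - mx) ** 2 for x in prior))
intercept = my - slope * mx

# A pupil's "value added" is the residual: actual minus predicted outcome.
residuals = [y - (intercept + slope * x) for x, y in zip(prior, outcome)]

# Averaging residuals over a school's pupils gives a crude value-added
# score; a real model would also adjust for the other factors listed above.
school_va = statistics.mean(residuals[:40])
```

A school whose pupils consistently beat their predicted results has positive value added; one whose pupils fall short has negative value added, whatever its raw results look like.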
Any numerical data - whether raw or value added - has the following limitations:
* a degree of statistical "uncertainty", because calculations based on numerical data can be as much the result of pure chance as of something more "real", such as the quality of education.
Usually, only a few schools can be shown to be performing at levels which are significantly above or below the norm in statistical terms.
* analyses of performance data - because they are necessarily retrospective - can easily be misleading about current and future performance.
* different outcome measures often give different results: a school with good results on five or more GCSE A*-C grades may be letting down less able pupils.
* likewise, a school may do well with some groups of pupils (Asian girls, for example) and not others (white boys), and/or in maths but not English.
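The first of these limitations, the role of pure chance, can be shown with a short simulation (all figures hypothetical: schools of 40 pupils drawn from an invented national distribution with mean 50 and standard deviation 15):

```python
import math
import random
import statistics

random.seed(2)

# 100 schools of 40 pupils each, every pupil drawn from the SAME national
# distribution -- so there are no real differences in effectiveness.
school_means = [
    statistics.mean(random.gauss(50, 15) for _ in range(40))
    for _ in range(100)
]

# Sampling alone still spreads the school averages over several points.
spread = max(school_means) - min(school_means)

# Standard error of a school mean: 15 / sqrt(40), roughly 2.4 points.
# Schools within about two standard errors of 50 cannot be distinguished
# from the norm in statistical terms, whatever a raw table suggests.
se = 15 / math.sqrt(40)
```

Identical schools end up several points apart at the top and bottom of such a "table" purely by chance, which is why only results well outside the expected band say anything significant.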
Most of this has been obscured by crude "league table" approaches. But value-added measures have helped to raise awareness of the complexity of performance data.
Value added has also become a handy way of describing a range of connected but distinct activities, including:
* making "like with like" comparisons of schools' or departments' performance;
* representing pupils' progress as well as their achievement;
* identifying which schools/departments/year groups are performing above or below predictions;
* identifying which individual pupils are currently performing above or below predictions.
However, the application of statistical models is most appropriate to analyses of aggregate past performance (rather than to the prediction of individual current or future performance).
So, to summarise, value-added analyses:
* are only as good as the data they are based on;
* do not identify the causes of effectiveness or ineffectiveness;
* may tell us nothing about desirable future performance;
* are only one instrument of evaluation, and must be set alongside more subjective data;
* provide no quick fixes or right answers to the problems of school improvement.
By themselves, value-added analyses cannot be validly used to make judgments about the effectiveness of a school, still less of a single class or teacher.
What they can do, par excellence, is to help to pose better questions about the way a school or local education authority has performed and to stimulate more informed discussion among school staff about how teaching is organised and delivered.
Perhaps there is now a need for a set of guidelines to help school managers and others to know what they can and cannot legitimately infer from complex performance analyses. We look forward to hearing readers' views.
Lesley Saunders is a principal research officer with the National Foundation for Educational Research, and Sally Thomas is a research lecturer at London University's Institute of Education.