
Into an uncertain world

Statistical analyses that try to identify the 'value' that schools have added to their pupils' educational development should always be accompanied by health warnings. Lesley Saunders and Sally Thomas explain what they are

Value-added measures of performance are here to stay, judging by the Government's recent pronouncements.

From this autumn all primary and secondary schools will receive value-added analyses of their examination and national curriculum test results. The main purpose is "to assist schools in looking at their pupils' progress against the national picture".

The Government also intends to include value-added measures in published performance tables by 2000 for secondary schools and 2002 for primaries.

It is therefore crucial to get the interpretation and use of such data right. Value-added indicators are an improvement on raw results, which say a lot more about a school's intake than its effectiveness. Value-added analyses attempt to strip away factors associated with performance that are not related to institutional quality. These include pupils' prior attainment, gender, ethnic group, date of birth, level of special need and social disadvantage.

It is not unusual for prior attainment plus the level of disadvantage in a school (as measured by free school meals) to account for as much as 80 per cent of differences between schools.
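The kind of adjustment described above can be sketched in a few lines of code. This is a minimal illustration only, using synthetic figures rather than real school results, and a simple least-squares model rather than the multilevel models typically used in practice: pupils' outcomes are regressed on prior attainment and a free-school-meals indicator, and a school's "value added" is what its pupils achieve beyond what that model predicts.

```python
import numpy as np

# Illustrative sketch only: synthetic pupil-level data, not real results.
rng = np.random.default_rng(0)
n_pupils, n_schools = 2000, 40
school = rng.integers(0, n_schools, n_pupils)
prior = rng.normal(100, 15, n_pupils)         # prior-attainment score
fsm = rng.binomial(1, 0.2, n_pupils)          # free-school-meal indicator
school_effect = rng.normal(0, 2, n_schools)   # small per-school effect
outcome = 0.8 * prior - 5 * fsm + school_effect[school] \
    + rng.normal(0, 8, n_pupils)

# Adjust outcomes for prior attainment and disadvantage by least squares.
X = np.column_stack([np.ones(n_pupils), prior, fsm])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
residual = outcome - X @ beta

# A school's "value added" is its pupils' mean residual: the progress
# they make over and above what the pupil-level factors predict.
value_added = {s: residual[school == s].mean() for s in range(n_schools)}
```

In this toy setting, as in the article's 80 per cent figure, most of the variation in raw outcomes is explained by pupil-level factors before the school itself enters the picture.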

Any numerical data - whether raw or value added - has the following limitations:

* a degree of statistical "uncertainty", because calculations based on numerical data can be as much the result of pure chance as of something more "real", such as the quality of education. Usually, only a few schools can be shown to be performing at levels which are significantly above or below the norm in statistical terms.

* analyses of performance data - because they are necessarily retrospective - can easily be misleading about current and future performance.

* different outcome measures often give different results: a school with good results on five or more GCSE A*-C grades may be letting down less able pupils.

* likewise, a school may do well with some groups of pupils (Asian girls, for example) and not others (white boys), and/or in maths but not English.
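The first limitation - statistical uncertainty - can be made concrete with a small simulation. In this sketch (synthetic numbers, with every school deliberately made exactly average), each school's value-added score is given a 95 per cent confidence interval; a school counts as "significantly" above or below the norm only if that interval excludes zero, and a handful will do so by chance alone.

```python
import numpy as np

# Illustrative sketch: value-added residuals for pupils in 40 schools,
# simulated so that every school is in truth exactly average.
rng = np.random.default_rng(1)
n_schools, pupils_per_school = 40, 50
residuals = rng.normal(0, 8, (n_schools, pupils_per_school))

means = residuals.mean(axis=1)
ses = residuals.std(axis=1, ddof=1) / np.sqrt(pupils_per_school)

# 95% confidence interval for each school's score; a school is only
# "significantly" above or below the norm if its interval excludes zero.
lower, upper = means - 1.96 * ses, means + 1.96 * ses
significant = (lower > 0) | (upper < 0)
print(f"{significant.sum()} of {n_schools} schools look 'significant' "
      "by chance alone")
```

The point is the one the article makes: with realistic pupil numbers, only a few schools can be separated from the norm with any statistical confidence, and some apparent outliers are pure noise.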

Most of this has been obscured by crude "league table" approaches. But value-added measures have helped to raise awareness of the complexity of performance data.

Value added has also become a handy way of describing a range of connected but distinct activities, including:

* making "like with like" comparisons of schools' or departments' performance;

* representing pupils' progress as well as their achievement;

* identifying which schools, departments or year groups are performing above or below predictions;

* identifying which individual pupils are currently performing above or below predictions.

However, the application of statistical models is most appropriate to analyses of aggregate past performance (rather than to the prediction of individual current or future performance).

So, to summarise, value-added analyses:

* are only as good as the data they are based on;

* do not identify the causes of effectiveness or ineffectiveness;

* may tell us nothing about desirable future performance;

* are only one instrument of evaluation, and must be set alongside more subjective data;

* provide no quick fixes or right answers to the problems of school improvement.

By themselves, value-added analyses cannot be validly used to make judgments about the effectiveness of a school, still less of a single class or teacher.

What they can do, par excellence, is to help to pose better questions about the way a school or local education authority has performed and to stimulate more informed discussion among school staff about how teaching is organised and delivered.

Perhaps there is now a need for a set of guidelines to help school managers and others to know what they can and cannot legitimately infer from complex performance analyses. We look forward to hearing readers' views.

Lesley Saunders is a principal research officer with the National Foundation for Educational Research, and Sally Thomas is a research lecturer at London University's Institute of Education.


* Data collected at the individual pupil level on a large and representative sample

* Outcome measure(s) reflecting all levels of pupil performance - not just, for example, five or more GCSEs at grades A-C

* Prior-attainment measure(s) for each pupil (preferably individual standardised scores), plus information about the pupil's background

* School context factors

* State-of-the-art statistical analyses

* Stringent quality control


* Results of analyses should be given in tabular, graphical and textual forms for individual schools or education authorities. These should at least show whether measured differences between schools are statistically significant.

* Wherever possible, results should be given at three stages of analysis:

"raw" results, results adjusted for pupil data (such as prior attainment) and results adjusted for pupil and school context data. Each stage of analysis usually offers some insight.

* At least three years' data is needed to establish if there is a trend upward or downward.

* If analyses are likely to be used or adapted to provide "predictions" of pupils' future performance, it must be made clear that the results are a statistical projection not a prophecy about real pupils.

* As data is not self-evident, a detailed commentary should accompany the analyses.
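The warning above about "predictions" can also be put in numbers. The sketch below uses assumed, illustrative model figures (the slope, intercept and spread are invented for the example): a fitted model yields a point projection for a pupil with a given prior score, but the plausible range around that projection is wide - which is why the result is a statistical projection, not a prophecy about a real pupil.

```python
# Sketch of projection vs prophecy; all figures here are assumptions
# chosen for illustration, not a fitted model from real data.
slope, intercept, residual_sd = 0.8, 10.0, 8.0

prior_score = 105
predicted = intercept + slope * prior_score   # point projection: 94.0

# Roughly 95% of pupils with this prior score would fall in this range:
low = predicted - 1.96 * residual_sd
high = predicted + 1.96 * residual_sd
print(f"projection {predicted:.0f}, plausible range {low:.1f} to {high:.1f}")
```

A spread of some 30 marks around the central figure is exactly why individual "predictions" should be presented to pupils and parents with great care.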
