
If it's wrong, then tell us

The audit unit should publish the information that would allow a proper rebuttal of its statistical reasoning, says Graham Hewitt

RECENT articles on Standard grade point averages (GPAs) highlight a problem with statistics: those who understand them can bamboozle those who don't, while those who don't, distrust them as the works of the devil.

The Inspectorate's audit unit presents GPAs and value-added measures as if they were definitive and irrefutable statements about pupils and schools, and draws conclusions about the quality of teaching using statistical procedures never designed for that purpose. This creates suspicion in the minds of teachers and brings the whole project into disrepute.

Regression analysis, which is the statistical technique of the moment, was designed to make predictions about one variable, let us call it performance at Higher grade, based on the known values of another variable, let us call it Standard grade results or GPA.

By looking at the performance of a cohort of pupils on both of these variables, a regression equation can be derived which will allow us to make predictions about the Higher performance of another cohort whose GPAs at Standard grade are known. This is a perfectly valid and useful technique and enables schools to set realistic targets for pupils.
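As an illustrative sketch of the technique described above, the following fits a least-squares regression line to one cohort's results and uses it to predict another pupil's Higher performance from their Standard grade GPA. The figures, and the choice of Python, are assumptions for demonstration only, not the audit unit's actual data or model.

```python
def fit_line(xs, ys):
    """Return (slope, intercept) of the least-squares line y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Sums of squared deviations and cross-deviations
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical cohort: Standard grade GPAs paired with Higher results
# (scales and direction are illustrative only)
gpas    = [1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5]
highers = [2.0, 2.6, 3.1, 3.9, 4.8, 5.5, 6.2, 6.8]

slope, intercept = fit_line(gpas, highers)
# Prediction for a new pupil whose Standard grade GPA is 3.2
predicted = slope * 3.2 + intercept
```

The derived equation is then applied to the next cohort, whose GPAs are known but whose Higher results are not.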

However, the predictions should carry a health warning: their limitations must be understood, and the hijacking of the technique by a value-added industry intent upon blaming teachers when targets are not met should be strongly resisted.

First, as advertisements for financial products say, "past performance is no guide to the future". We cannot be certain that the performance of one cohort will be identical to the performance of another, although it is probably reasonable to assume that nationally it will be roughly similar, even if individual schools show large variations.

Second, because all measurement is subject to error, the audit unit, which disingenuously stated that it didn't want to complicate the issue by providing additional statistics, really should provide standard errors and correlation coefficients. Without the correlation we don't know how strong the relationship is between GPA and Higher performance in a particular subject, and without the standard error for a given prediction we cannot come to any conclusion about its accuracy.
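The missing correlation coefficient is straightforward to compute. A sketch, using the standard Pearson formula and invented cohort figures (not any real published data):

```python
import math

def correlation(xs, ys):
    """Pearson correlation coefficient: 1 = perfect linear relationship, 0 = none."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical cohort figures, for illustration only
gpas    = [1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5]
highers = [2.2, 2.4, 3.3, 3.7, 5.0, 5.3, 6.0, 7.0]

r = correlation(gpas, highers)
# A value near 1 would mean GPA is a strong predictor in this subject;
# a value near 0 would mean the predictions are little better than guesswork.
```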

With knowledge of the standard error, we can say that a pupil's actual score will lie within plus or minus one standard error of the prediction in about two-thirds of cases, and within plus or minus two standard errors in 95 per cent of cases.
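Continuing the sketch, the standard error of the estimate measures the typical gap between predicted and actual scores, which is why a prediction is better quoted as an interval than as a single figure. Again, the cohort data below are invented for illustration:

```python
import math

def regression_with_se(xs, ys):
    """Fit y = slope * x + intercept and return the standard error of the estimate."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    intercept = my - slope * mx
    residuals = [y - (slope * x + intercept) for x, y in zip(xs, ys)]
    # Standard error of the estimate, with n - 2 degrees of freedom
    se = math.sqrt(sum(r * r for r in residuals) / (n - 2))
    return slope, intercept, se

# Hypothetical cohort figures, for illustration only
gpas    = [1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5]
highers = [2.2, 2.4, 3.3, 3.7, 5.0, 5.3, 6.0, 7.0]

slope, intercept, se = regression_with_se(gpas, highers)
pred = slope * 3.0 + intercept
# Roughly two-thirds of actual scores fall within one standard error of
# the prediction, and about 95 per cent within two standard errors.
interval_two_thirds = (pred - se, pred + se)
interval_95 = (pred - 2 * se, pred + 2 * se)
```

The two-to-one odds quoted in the next paragraph correspond to the one-standard-error interval here.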

All right, it doesn't sound very incisive to tell a pupil, "the odds are two to one that you will score between band eight and six in your Higher English", but that is to miss the point.

The prediction is a guide to be used by teachers in conjunction with their professional knowledge and expertise to set realistic and attainable targets for each pupil, a guide which is based on evidence rather than on gut feeling, prejudice or nothing at all.

Third, the GPA is a very crude measure: it adds together apples, oranges and lemons to produce a single figure, and ignores the fact that each grade covers a wide band of performance. At the moment, though, it is the best measure there is, although we don't know how good, because we don't have the correlation. Hopefully, better predictors will be developed, and there is some evidence that other combinations of scores may have higher correlations than the GPA.

Fourth, when pupils, departments or schools fail to attain the targets they "should have met" based on GPA predictions, it is not possible to draw any conclusions about the reasons for that "underperformance" without much further investigation. After all, the "underperformance" may well be within the margin of error explained above.

The simplistic conclusion that a negative "value-added" (value-subtracted?) is solely due to inefficient teaching is not warranted. There could be many other factors, not least the attitudes, motivation and effort of the pupils beyond what teachers can hope to influence, the crudeness of the GPA measure, the different content of Standard grade and Higher exams, and so on.

However, schools could use such measures internally as a starting point for examining their own effectiveness by identifying pupils and departments that performed less well than expected and devising ways of helping such pupils in the future.

While taxpayers have a right to know if schools are using their cash effectively, they and schools also have a right to expect that the Government and its civil servants use valid and reliable methods of measuring that effectiveness.

Graham Hewitt is a part-time consultant working with a school to develop a programme for target-setting and tracking and monitoring pupil progress. For more information visit
