A new consultation paper about value-added performance tables is no more than a starting point for debate, says Harvey Goldstein
The Government set out in last week's White Paper its intention to make a start on publishing value-added tables in some form from next year - but the complex issues involved in such a task have yet to be resolved. The consultative paper on "value-added indicators," recently published by the School Curriculum and Assessment Authority, offers another chance to debate the general issue of publishing school league tables.
In response to mounting criticism of the unfairness of publishing "raw" performance tables of GCSE and A-level exam results, the Government in 1994 asked SCAA to consider contextualising the information so that, at the very least, children's prior achievements could be allowed for in comparing schools. This was a welcome recognition of the complexities involved in making comparisons by using children's performance data - and makes the SCAA paper especially timely, in view of the present Government's commitment to the continued publication of simple "raw" league tables.
The consultative paper deals with primary and secondary schools. It sets out the background of the associated research project, attempts to describe how data can be used to identify added value, and summarises the project findings. It also contains recommendations and topics for debate.
The authors have tried hard to summarise a complex and sometimes technical debate, and to suggest how value-added data might be used by schools themselves. SCAA hopes that the paper will stimulate discussion in schools, and sensibly calls for more training of teachers which would help them use and interpret information on their pupils' performance.
All this is commendable; but, sadly, the consultative paper has important weaknesses which undermine its usefulness.
The main problem is that the paper fails to set out all the issues surrounding the construction and use of value-added measures. It gives little hint of the considerable debate going on about these issues, and in many places it oversimplifies matters, thereby closing off important areas of discussion.
In particular the paper claims that, once the children's prior attainment has been taken into account when comparing schools, other factors such as social background have only a small additional effect and can be ignored. In fact, such factors - including ethnic origin, eligibility for free school meals, gender and type of school - should not be ignored, since they can interact with prior attainment in important ways.
A key finding of school effectiveness research is that schools are not uniformly "effective" or "ineffective" but differentially so for different groups of children, such as those from particular ethnic groups. More importantly, this research shows that many schools are differentially effective for initially low and initially high achieving children.
In a recent analysis of A-level exam results carried out by Sally Thomas and myself in conjunction with the Department for Education and Employment, we showed that it was possible, for each school, to calculate separate value-added scores for the top 10 per cent of attainers at GCSE, and the bottom 10 per cent. Once this had been done, there was only a moderate correlation (0.6) between the two sets of scores.
In other words, schools were being differentially "effective" in relation to achievement at GCSE.
Such detailed information is important for school improvement: it may be misleading to report a single, overall, value-added score which is effectively averaged over all the children attending a school.
To be fair, the SCAA paper does recognise the need to report separately on different school subjects rather than in terms of an overall "school effect", and a forthcoming book by Pam Sammons, Sally Thomas and Peter Mortimore presents new evidence on the importance of this. Of course, the same issues about differential effectiveness will apply to each subject.
The other issue given scant attention is that of the reliability of value-added scores. The paper mentions that most schools cannot be separated in any ranking because of the "sampling error" resulting from small numbers of pupils. It recommends that value-added scores be aggregated over three years to minimise this - but fails to point out that even after this has been done many schools still cannot be ranked.
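The sampling-error point can also be sketched in a few lines. Again this is an invented illustration, not the SCAA analysis: the between-school and within-school spreads, the cohort size and the simple 95 per cent interval are all assumptions chosen only to show why pooling three years narrows the intervals without making most schools statistically separable.

```python
import random
import statistics

random.seed(2)

# Assumed figures: a between-school spread of 0.25 against a
# pupil-level spread of 1.0 reflects the general finding that schools
# account for a modest share of the variation in pupil outcomes.
N_SCHOOLS, COHORT = 100, 60       # one year-group of about 60 pupils
BETWEEN_SD, WITHIN_SD = 0.25, 1.0

def simulate(n_pupils):
    """Return (estimated score, 95% interval half-width) per school."""
    results = []
    for _ in range(N_SCHOOLS):
        true_effect = random.gauss(0, BETWEEN_SD)
        cohort = [true_effect + random.gauss(0, WITHIN_SD)
                  for _ in range(n_pupils)]
        estimate = statistics.mean(cohort)
        half_width = 1.96 * WITHIN_SD / n_pupils ** 0.5
        results.append((estimate, half_width))
    return results

def share_separable(results):
    """Share of schools whose interval excludes the average (zero)."""
    return sum(abs(est) > hw for est, hw in results) / len(results)

one_year = simulate(COHORT)
three_years = simulate(3 * COHORT)  # pooling three cohorts of pupils
print(f"separable from average after one year:    "
      f"{share_separable(one_year):.0%}")
print(f"separable from average after three years: "
      f"{share_separable(three_years):.0%}")
```

Under these assumptions, pooling three cohorts shrinks the uncertainty intervals, yet a large fraction of schools still cannot be distinguished from the average - let alone placed in a meaningful rank order against each other.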
The position is much worse when we look at how school performance changes over time, since time trends are estimated with far less reliability than single-year scores. The paper also fails to point out that value-added data is out of date by the time it can be computed. For example, an analysis using GCSE scores obtained in 1998, adjusted for the same children's key stage 2 prior attainment in 1993, describes a cohort of children who started at their schools five years previously. But school populations change, and it is a moot point whether such data can or should be used to judge current performance. This difficulty is exacerbated if scores are averaged over three or more years.
Finally, there are a number of broader implications of this debate. The paper sensibly recommends that issues such as student mobility (which school should mobile children be assigned to?) should be sorted out before value-added league tables are published. To this list should be added improving the statistical procedures and taking account of the historical nature of performance data. Yet most of these caveats apply equally to "raw" league tables - which, in addition, present results with no context at all.
If the Government accepts such reservations about value-added tables, then logically it ought to do so for the current tables. Last week's White Paper, however, endorses these without serious reservation and proposes to introduce so-called "improvement" tables whereby the trends in schools' performance over time will be compared.
Unfortunately, the evidence suggests that, in terms of both reliability and timeliness, such tables will be even more suspect than the current ones - in addition to the fact that they have no real value-added element.
An informed debate on all these issues is overdue. It is clear that there are potential uses for value-added analysis and several education authorities, in conjunction with schools, are beginning to implement value-added systems.
The SCAA paper is a welcome starting point for debate, but it will only be useful if the broader issues are also discussed. If the Government is to raise standards it needs to come to terms with some difficult issues and avoid oversimplifying the complex realities of children's performance.
Professor Harvey Goldstein is in the mathematical sciences department at London's Institute of Education. The DFEE value-added report can be downloaded from: http://www.ioe.ac.uk/hgoldstn/download