The School Curriculum and Assessment Authority working party, chaired by Dr John Marks, has now concluded. Its report, Value-Added Performance Indicators for Schools, has been warmly welcomed by the Secretary of State. She has "firmly committed" herself to the development of robust national measures of the value added by schools to children's education.
The advantages to be gained from a VA approach form the foundation for most of the 25 recommendations. The limitations, for example, of using current national curriculum levels for such analyses are pointed out; proposals for better measures of prior attainment are put forward; the case for valuing all levels of pupil performance at GCSE (and not just the higher grades) is made; and the importance of working for school improvement throughout the system is stressed. The era of crude judgments of schools' performances should be drawing to a close.
But how can these ideas be put into practice? Chapter 3 describes the three "simple models for estimating value added" it suggests should be trialled.
Unfortunately this chapter bears several marks of having been hastily put together. The first hint, however, that some aspects of this report were going to be rather unusual came when I found the "correlation coefficient" referred to by the notation hitherto reserved for the related concept of "variance explained". Such an elementary mistake was, I assumed, a typographical error. But I discovered subsequently that the word-processor had been up to its tricks again - three times in quick succession.
The first of the models involves a graph relating "output academic score" to "input academic score". The initiated may recognise this as the "means-on-means" approach. However, in the interests, I assume, of "simplicity", the reader is spared the knowledge that there has been a (largely critical) debate about this particular approach over the past decade.
Having worked on these issues, I had a pretty good idea what to expect, despite the text being rather sparse. Even I had difficulty, however, understanding what the axes on the graph related to. Both the "input" and "output" scores range from 10 to 40. Ten to 40 units of what?
The report assures us that graphs like these can be "plotted either for individual pupil scores or for scores aggregated at the level of the school". It suggests, however, that "it is much simpler to do the latter and subsequent discussion will assume that this is what has been done". This assumption is adhered to for the next three sentences.
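For readers who want to see what the "means-on-means" approach amounts to in practice, a minimal sketch follows. It fits an ordinary least-squares line to school-mean scores and reads off each school's residual as its "value added". The school names and scores are invented for illustration; nothing here comes from the report's own data.

```python
# Illustrative sketch of the report's first model ("means-on-means"):
# regress each school's mean output score on its mean input score and
# treat the residual as "value added". All figures below are invented.
from statistics import mean

schools = {                      # (mean input score, mean output score)
    "School A": (22.0, 27.5),
    "School B": (30.0, 31.0),
    "School C": (18.0, 24.0),
    "School D": (35.0, 38.5),
}

xs = [x for x, _ in schools.values()]
ys = [y for _, y in schools.values()]

# Ordinary least-squares slope and intercept.
x_bar, y_bar = mean(xs), mean(ys)
slope = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
         / sum((x - x_bar) ** 2 for x in xs))
intercept = y_bar - slope * x_bar

# "Value added" = actual mean output minus the output predicted
# from the school's mean input.
for name, (x, y) in schools.items():
    predicted = intercept + slope * x
    print(f"{name}: predicted {predicted:.1f}, residual {y - predicted:+.1f}")
```

The sketch makes the approach's chief weakness visible: everything hangs on four aggregated points, and a school's "value added" is simply its vertical distance from a line fitted through them.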
The second model is back-of-the-envelope stuff. "It can be used without any sophisticated mathematical calculations." We are referred to an earlier table which shows the median score in terms of A/AS-level points for pupils with different average GCSE grades. For those who got an average grade C, for example, it was 10 points; for a B average, 20 points; and for an A, 33.
The report suggests that the proportions who, for a given average GCSE grade (say a B), achieve particular hurdles (say 20 A/AS-level points or more) could be calculated and then summed to provide an "indicator of how well a school is performing".
This would, indeed, be a simple method if pupils' grades averaged out as described. Unfortunately they don't. Between an exact grade A average and an exact grade B average lies a whole range of positions, from a little worse than an A average to a little better than a B, and so on right down the grading scale. The suggestion that this is simple to calculate is based on a misunderstanding of how the relevant table in Chapter 2 was created. To produce scores for whole schools would require a truly massive grid from which to start work - and a computer.
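To see why a computer is needed, consider what the calculation looks like once average GCSE grades are treated as the continuum they are. The sketch below interpolates a national-median benchmark between the report's three anchor values (C average, 10 points; B, 20; A, 33) and counts the pupils clearing it. The pupil records, the grade coding (A=7 down to G=1) and the linear interpolation are all my own illustrative assumptions, not the report's method.

```python
# Sketch of the "simple" second model once pupils' average GCSE grades
# are treated as continuous. Grade coding (A=7 ... G=1), pupil data and
# linear interpolation between anchors are invented for illustration.

def expected_points(avg_grade):
    """Interpolated national-median A/AS points for a given average
    GCSE grade, anchored at C -> 10, B -> 20, A -> 33 (clamped)."""
    anchors = [(5.0, 10.0), (6.0, 20.0), (7.0, 33.0)]  # (grade code, points)
    if avg_grade <= anchors[0][0]:
        return anchors[0][1]
    if avg_grade >= anchors[-1][0]:
        return anchors[-1][1]
    for (g0, p0), (g1, p1) in zip(anchors, anchors[1:]):
        if g0 <= avg_grade <= g1:
            t = (avg_grade - g0) / (g1 - g0)
            return p0 + t * (p1 - p0)

pupils = [  # (average GCSE grade, A/AS-level points achieved) - invented
    (5.2, 14), (5.8, 12), (6.1, 24), (6.5, 18), (6.9, 30), (7.0, 34),
]

cleared = sum(1 for g, pts in pupils if pts >= expected_points(g))
indicator = cleared / len(pupils)
print(f"{cleared} of {len(pupils)} pupils at or above the national median")
```

Even this toy version needs pupil-level records and a benchmark defined at every point of the grading continuum - precisely the "massive grid" and computer the report's talk of back-of-the-envelope simplicity wishes away.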
Furthermore, most statisticians would almost certainly advise using the procedures (and especially the "scatterplot") already employed for the first model. So the second model would turn out, in practice, to be not much different from the first except that it was based on pupil-level data.
And what of the third model (ominously referred to as "a tripartite division")? Surprisingly, given the report's claim that three models are being proposed, this turns out not to be a model at all but merely a way of interpreting the results of the first two. As the report declares "it was suggested that whichever model was used it might be best to divide schools" into groups: "those achieving roughly as expected according to national data; those achieving better than expected; and those achieving less well".
There is, of course, another alternative - namely the multi-level model reported in last week's TES. Curiously this internationally-acclaimed approach never really gets considered, despite the fact that its development in several LEAs is commended "as an appropriate way to separate out the degree to which schools affect the progress which their pupils make".
There are at least two good reasons for using it. First, it offers the best and most sensitive estimates of how well schools are performing. And second, boring as it may be, it confirms that most schools most of the time produce exactly the sort of results one would predict from knowledge of their intakes. League tables of the footballing type, as opposed to broad-brush statements about effectiveness, simply cannot be justified.
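The core idea behind the multilevel approach can be conveyed in a few lines. Pupil-level residuals (actual minus nationally predicted outcomes) are averaged within each school, and the average is then "shrunk" towards zero in proportion to how little data the school supplies, so that a small school's apparently dramatic results are discounted appropriately. The data and the shrinkage constant below are invented; genuine multilevel software estimates the variance components from the data itself.

```python
# Toy sketch of the multilevel idea: average each school's pupil-level
# residuals, then shrink the average towards zero by n / (n + k), where
# k stands in for the ratio of within-school to between-school variance.
# All figures are invented; real software estimates k from the data.
from statistics import mean

residuals = {  # pupil outcomes minus the national prediction
    "Big School":   [1.0, -0.5, 2.0, 0.5, 1.5, -1.0, 1.0, 0.5],  # n = 8
    "Small School": [4.0, 3.0],                                   # n = 2
}

k = 4.0  # illustrative within/between variance ratio

for school, r in residuals.items():
    raw = mean(r)
    shrunk = raw * len(r) / (len(r) + k)
    print(f"{school}: raw effect {raw:+.2f}, shrunk estimate {shrunk:+.2f}")
```

The small school's raw "effect" of +3.50 shrinks to barely a third of that, while the big school's modest effect is left largely intact - which is exactly why multilevel estimates confirm that most schools perform much as their intakes predict.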
Two grounds are offered for rejecting the ML approach. One is that it "would be inappropriate, at this stage, to recommend a national individual pupil-level database". In fact, this is completely at variance with the report's earlier recommendations. Has it forgotten that the second of the two models it commended requires such a database? Indeed, the data it cites by way of example are based on one. Given also that the first model, to allow for pupil mobility, is to be confined to pupils who have attended the same school throughout, something similar will be needed here as well.
The second ground is that of "simplicity". The report seems to think that "weighted ordinary least squares regression" with "corrections for errors of estimation" is part of the discourse of citizens on the Clapham omnibus, but that they would have difficulty coping with ML approaches. I suspect they would be perplexed by both but want assurance that the summary judgments they were offered about schools' performances were not only fair and valid but based upon the best procedures experts could devise.
In reality, there is no shortage of statisticians with established expertise in doing ML analyses and the capacity to turn the results into something intelligible. In declaring that it is looking for those with "a proven track-record in value-added work" SCAA seems to be acknowledging this.
In the meantime SCAA staff will need to take any remaining copies of the report home for Christmas - along with bottles of Tippex. It would be unfortunate if word got out that someone at SCAA couldn't tell their "r's" from their "R-squares".
Professor John Gray is Director of Research at Homerton College, Cambridge. For free copies of his report on Value-Added Approaches: Lessons and Challenges please send him an A4 stamped addressed envelope.