Statistics ... or just damned lies?

12th May 2006, 1:00am
How do you define a good school? If you believe what heads are telling us about Ofsted verdicts, inspectors believe this elusive quality can be captured in a single number between 1 and 100.

Finish with a contextual value added (CVA) ranking between 1 and 5, and you are likely to be judged outstanding. Between 5 and 50, the chances are you are good. Between 50 and 90, you will emerge as “satisfactory”. Anything above 90, and a failing judgement beckons.
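The banding heads describe amounts to a simple threshold rule on the percentile ranking. A minimal sketch, using only the cut-offs reported above (the function name is illustrative; this is not any official Ofsted formula):

```python
def verdict_from_cva_ranking(ranking: float) -> str:
    """Map a CVA percentile ranking (1 = best, 100 = worst) to the
    verdict heads say it attracts. Bands taken from the thresholds
    reported above; purely illustrative, not an official rule."""
    if not 1 <= ranking <= 100:
        raise ValueError("CVA ranking should fall between 1 and 100")
    if ranking <= 5:
        return "outstanding"
    if ranking <= 50:
        return "good"
    if ranking <= 90:
        return "satisfactory"
    return "failing"

print(verdict_from_cva_ranking(3))   # outstanding
print(verdict_from_cva_ranking(72))  # satisfactory
```

On this account, a school's whole inspection fate can turn on which side of a single cut-off its number falls.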

CVA looks like the holy grail for those who have been trying to come up with a statistic to judge, objectively, how good a school’s exam results are.

The problem is, its detractors claim, that no measure can ever provide an unbiased guide to the quality of schooling. The suggested objectivity is, therefore, spurious.

Every measurement system is based on assumptions, some of which are contentious.

For example, CVA assumes, controversially, that a General National Vocational Qualification is worth four GCSEs.

It also uses a particular method for calculating the level of deprivation facing schools, which is slightly different from that being used by the alternative Fischer Family Trust model.

And other factors, such as parents’ use of out-of-school tutors to improve pupils’ performance, are not taken into account by CVA but can also influence a school’s exam results.

Laying great weight on CVA also creates perverse incentives, as schools are rewarded for anti-educational behaviour such as excluding pupils who are unlikely to do well in exams.

All of which suggests that, if a complex measure such as this is to be used, it should be handled carefully by inspectors, as one piece of evidence among many. Yet accounts from heads suggest this is not happening.

With CVA being launched to coincide with a new inspection regime giving less time for classroom observation, more emphasis than ever is being placed on the data, where the Ofsted/DfES version of CVA is king.

It is not clear why this version of CVA is any better than its rival. And the two models’ complexity makes it almost impossible for schools, or inspectors, to work out why some schools fare better on one measure than another.

The way CVA is allegedly being used also suggests an authoritarian, not-to-be-contested view of what matters in education, which is ironic given that the new inspection system was sold as being more user-friendly for schools.

Is this an ideal system? It hardly looks like it.

* warwick.mansell@tes.co.uk