Robert Coe, professor in the School of Education and director of the Centre for Evaluation and Monitoring (CEM) at Durham University, writes:
Most questions in education, especially important questions, do not have simple answers. But here is one that does: What proportion of schools need to improve? Answer: 100%, every single one.
The evidence from research has been clear for many years: the quality of learning experienced by students depends more on individual teachers than it does on schools, and almost all schools have a mixture of quality. That means if you focus improvement attempts only on ‘bad’ schools, you will miss a lot of opportunities to improve: most bad teaching is not in bad schools.
In England, the government’s recent response to the consultation on secondary accountability contains much that is welcome, but it continues to focus on the school as the unit of judgement, pressure and change, and does not address the difficulties of interpreting accountability data.
Most people who have experience and expertise in interpreting and using assessment data understand that the facts do not speak for themselves. There is no formula that will pick out bad teaching, no league table that will capture true quality. The headline figures may raise questions, but answers come only from drilling down into the data, asking more questions and seeking further evidence to corroborate. Accountability should certainly be based on data, but data needs to be interrogated and interpreted. In short, data should support judgement, not replace it.
In England, this vital role of judging quality is rightly the function of Ofsted. However, evaluating performance data is not straightforward and requires a high level of skill and experience. Do Ofsted inspectors consistently have the skills required?
Anecdotally, I know of a considerable number of cases where those being inspected believe their inspectors did not have a good understanding of data. But if you work in an organisation that provides assessment data to schools (as I do), perhaps teachers are disproportionately likely to tell you their grievances. Fundamentally, though, it doesn’t matter how representative these stories are.
My point is a simple one: it shouldn’t be up to critics to prove that Ofsted’s judgements are not sound; the onus should be on Ofsted to demonstrate that the people who interpret data really know what they are doing and that their judgements are sound.
How would they demonstrate this? I’d like to see them do two things.
The first is that Ofsted should define and then implement a clear standard for the skills and knowledge inspectors require. There should be an independently accredited professional standard, with a clear specification for what inspectors need to know, the kinds of training that should be expected and a rigorous assessment to show that the standard has been met. In other words, inspectors should have to pass an exam before they are allowed to inspect.
They might, for example, be presented with some real data from a school and asked to say what it tells them, and with what level of confidence. What questions would they want to ask? What kinds of other evidence would they seek? How would they respond to specific arguments from people in the school or to additional, perhaps contradictory, data?
The second is that Ofsted should build a more transparent, research-based approach to quality assurance of their own judgements. As a researcher, if I want to use ratings or judgements in a research study I have to show that they are not dependent on the individual who generated the rating, the timing, the context or other spurious characteristics of the thing being rated. If the rating I give on a Monday is not the same as the one you give on a Tuesday, we cannot claim either rating is actually about the school (or whatever we claim to be rating).
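To make the Monday-versus-Tuesday point concrete: researchers typically quantify this kind of consistency with an inter-rater agreement statistic. The article does not name one, but a standard choice is Cohen's kappa, which corrects raw agreement for the agreement two raters would reach by chance. A minimal sketch, using hypothetical grades from two inspectors (the inspector names and school ratings below are invented for illustration):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters' categorical ratings."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: proportion of items on which the raters match.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's marginal rating frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical grades for ten schools from two inspectors.
inspector_1 = ["good", "good", "outstanding", "inadequate", "good",
               "requires improvement", "good", "outstanding", "good", "inadequate"]
inspector_2 = ["good", "requires improvement", "outstanding", "inadequate", "good",
               "good", "good", "good", "good", "inadequate"]

print(round(cohens_kappa(inspector_1, inspector_2), 2))  # prints 0.52
```

Here the inspectors agree on 7 of 10 schools, but after discounting chance agreement the kappa is only about 0.52, conventionally read as "moderate" agreement; a quality-assurance regime of the kind argued for above would set and monitor a threshold on statistics like this.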
Establishing the validity of judgements is important enough in research, but much more so if those judgements are going to be used to make crucial decisions about a school or teacher. It seems incomprehensible to me that in the 20 years Ofsted has operated, no one has thought this should be required. And, just to be clear, providing evidence of the validity of judgements shouldn’t be done once in a half-baked study conducted by Ofsted, but should be done rigorously, with independent scrutiny, on an ongoing basis every time a judgement is made. I know many researchers (including me) would be willing to collaborate with Ofsted to support this work.
I believe the latest changes to the accountability system are both substantial and important improvements on what we had before, but it feels as though we have mended the holes on one side of a ship. They needed mending and I am pleased that we have mended them, but unless we address the other half of the problem the only benefit will be that we sink slightly more slowly than we would have done.