How level is the playing field for school improvement? In one school a teacher told us: "The old guard form a critical mass. They're a group of piss-takers who block change and the head is caught between them." After a brief but promising start, the drive to improve the school stalled.
In another outwardly similar school the deputy said: "The turnaround was very easy; a group of staff knew what was wrong and how to fix it; they just wanted somebody to help them." The momentum was sustained for four years.
Our research looked at 12 schools. All accepted that they had to change, but some were making more progress than others. What helped or hindered them?
How you define "improvement" makes a difference. If, for example, a school focuses simply on "raw" results, then "improving" the intake can become a priority - but here one school's gain is, of course, likely to be another's loss.
We developed a more satisfactory alternative - the first value-added approach to judging school improvement. The schools that particularly interested us were the ones which increased the amount of "added value" every year for successive cohorts of similar pupils.
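One way to picture this value-added criterion is as the gap between pupils' actual results and what their prior attainment would predict, tracked across cohorts. The sketch below is purely illustrative: the figures, the simple linear baseline and all names are invented for the example, not drawn from the study's actual statistical model.

```python
# Hypothetical sketch of a value-added comparison across cohorts.
# The linear baseline and all numbers are invented for illustration.

def value_added(prior_scores, exam_scores, slope=1.0, intercept=0.0):
    """Mean gap between actual exam scores and those predicted
    from prior attainment (a simple linear baseline)."""
    predicted = [slope * p + intercept for p in prior_scores]
    gaps = [actual - pred for actual, pred in zip(exam_scores, predicted)]
    return sum(gaps) / len(gaps)

# Successive cohorts with similar prior attainment: in this sense a
# school is "improving" if its value added rises year on year.
cohorts = {
    1994: ([50, 55, 60], [52, 56, 61]),
    1995: ([50, 55, 60], [54, 58, 63]),
    1996: ([50, 55, 60], [56, 60, 65]),
}
trend = [value_added(prior, actual) for prior, actual in cohorts.values()]
rising = all(later > earlier for earlier, later in zip(trend, trend[1:]))
```

On these invented numbers the value added rises each year, so the school would count as improving under the criterion; with raw results alone, a change in intake could produce the same pattern without any real gain.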
Two notable findings emerged. First, it was rare for a school to secure year-on-year improvements over a five-year period; three or four years, followed by a dip or a plateau, was far more common.
Second, consistency matters. Schools which had found ways of improving their pupils' performance by an average of one grade in one subject per pupil per year were more likely to sustain improvement.
It was not unusual for schools to have launched up to a dozen improvement initiatives over five years; managing large numbers of changes at the same time proved difficult and often took its toll. There were numerous reports of "improvement fatigue".
Changes in several areas seemed to be correlated with improvements in schools' effectiveness. Four were particularly important. "Rapidly improving" schools:
* played the "exam game", often by entering pupils for more exams, identifying pupils at the "borderline" for extra support, providing additional teaching and revision opportunities and reviewing their choice of exam boards. Such policies secured some short-term "rewards".
* gave particular attention to pupil behaviour in class and ensured homework policies were more consistently applied.
* began to tackle the processes of teaching and learning at classroom level by encouraging discussion and enquiry among teachers, fostering collaborative work as a means of sharing good practice and developing the use of classroom observation as part of the appraisal process. As one teacher put it: "In this school it's OK to talk about teaching."
* gave more responsibilities to pupils by helping them to manage the learning process themselves, using pupil views more actively and making greater efforts to involve them in their schools. "It's the pupils' school as much as the staff's," one deputy said.
The majority of schools had tackled at least one of these four areas (usually the first) over a five-year period. The rapid improvers, however, stood out for their progress on three fronts: the first, the second and either the third or the fourth. None of the schools had yet worked on both.
Everyone involved in school improvement agrees that leadership matters. But how can you tell if it is effective? Our answer was to ask the staff. They told us.
Only in those schools where a substantial minority of the staff were talking about changes to the quality of their own classroom teaching was rapid improvement taking place. If they weren't talking about it, we concluded, it wasn't working.
John Gray works at Homerton College, Cambridge. The Open University Press published his book "Improving Schools" (co-authored with Hopkins, Reynolds, Wilcox, Farrell and Jesson) last week