Is inspection up to scratch?
I have a lot of experience of inspectors. I have served on various national committees, often alongside inspectors, many of them capable and insightful. I have been inspected in every school I have worked in - three times as head in three schools - and have valued the professional check on my judgments.
In inspection reports, I have been on the receiving end of both highly positive and grudgingly positive comment, but have wondered about the credibility of some of those commenting. It always surprised me that very few inspectors have ever run a school.
The new "cuddly" form of inspection makes public dissent even less likely. But, given the strong views about HMIE, with a new head of the inspectorate due to take up post and a new Education Secretary keen to learn from the tragic death of Borders head Irene Hogg, there is no better time for a head to ask: "How good is our inspectorate?"
There are many good things to say, many covered on HMIE's website, which shows much impressive activity. But what of the quality of this activity? Does it fulfil its key strategic objectives: to improve standards and quality, build capacity for improvement and generate the right kinds of evidence?
Despite "very good" work in some areas, there are "important weaknesses" and "unsatisfactory" performance in others.
I believe there are three ways in which the inspection system gets it wrong: unreliability, limited evidence and too simplistic an evaluation model.
The system forces inspectors to make judgments about the quality of schools, "good" or "weak", summarised in a table at the end of the inspection report. To a lay reader this looks very precise, but it is not.
There is a shaky use of statistics, as in the following example. To control for different school contexts, each secondary has 20 "comparator" schools, selected using "proxy indicators" (such as the percentage of pupils entitled to free school meals). Quality of attainment is judged on the aggregate performance of a school's pupils relative to that of pupils in the comparator schools. One school was near the bottom on all exam performance measures one year, and near the top on all but three the next, because 17 of its 20 comparator schools had changed. Such erratic variations suggest caution: if reliability can be guaranteed only to within, say, plus or minus 5 per cent, it is only at the extremes ("unsatisfactory" or "excellent") that inspectors should be definitive in their judgments.
Are grade descriptions more reliable? The quality indicators (QIs) aim to provide a reliable standard for use by anyone. Each describes two performances: one "very good" and one "weak". But, astonishingly, the other four grades are not described in sufficient detail. "Good" and "satisfactory" are widely used and are the basis of a positive report, yet neither is fully described. Judgments made using the apparently objective "six-point scale" are thus less reliable than the inspectorate claims.
The sample of QIs in an inspection report provides insufficient evidence: 25 QIs are not reported on - for example, "staff sufficiency and recruitment". Any head who has had to manage significant levels of staff absence knows the impact one or two long-term vacancies can have on every aspect of a school's performance.
This raises another issue: is "staff sufficiency" an "input" to a school or an "outcome" of the activities of the school and its staff? Can or should schools be held accountable for factors that result from decisions made elsewhere, whether nationally (overall teacher training and recruitment strategy) or locally (human resource recruitment policies)? Yet inspectors show no interest in measuring the inputs to schools, including the social capital of the community, which varied markedly in the three schools I led. A narrow focus on "the performance of school staff" excludes important evidence about inputs.
Moreover, successive changes to the inspection process have shown progressively less interest in differences of quality within schools, even though Graham Donaldson accepts that differences in quality within schools are greater than those between schools. This obsession with attributing all responsibility at the level of the school arises from the "market choice" concept of schooling, developed in the 1980s, which still dominates the inspection process.
In most cases, this model seems to work: most Scottish schools are "good". But, since inspection focuses largely on the outcomes of schooling, and there are few evidence-generating techniques in relation to the inputs, the model offers few explanations of success or failure - other than the actions of the staff in individual schools.
These actions are crucial, but other factors may also be vital. Scotland has longstanding and significant social inequalities directly correlated with educational inequality. Schools in areas of disadvantage are challenged daily to address psycho-social as well as educational issues. A higher proportion of their pupils depend on the school for motivation to learn. Both HMIE and the Organisation for Economic Co-operation and Development identify the lowest-performing pupils, most of whom live in the least advantaged areas, as the most important challenge for Scottish education.
But the HMIE model of improvement does not successfully address this challenge. Its thin model of change and improvement locates the engine for change in individual schools and expects teachers and heads, individually and collectively, to aspire to "best practice" in other places, without analysing the factors which support "best practice" or make it possible.
Teachers have a vital role but cannot accept all the responsibility. In failing to analyse and research the contribution of family, cultural, political and other factors, the HMIE model of improvement is too limited. The most pressing need is to engage the pupils who perform least well in this system. HMIE's model offers few answers, so its unsatisfactory performance demands improvement.
Next week: how can inspection be improved?
Danny Murphy is headteacher of Lornshill Academy in Alloa.