There is much to welcome in the recently published document outlining changes to the Framework for Inspection of schools and pre-school centres by HMIE, not least the tone of both the document and the process by which it was produced.
The new brooms of Bill Maxwell and Michael Russell have been busy, and to good effect. There is a clear intention to work alongside schools and their communities and to recognise flaws in the previous system. The publication of the consultation research adds weight to the proposals, which are seen to emerge from that exercise. Particularly welcome is HMIE's commitment, in its response, to include a nominated member of the teaching staff on the inspection team. This will contribute significantly to good communication and to a sense of collaboration (inspection with, not inspection of).
Those working in schools have good reason, however, to be wary. The new framework says little or nothing on two important areas at the heart of the ongoing problems with the inspection process it seeks to address: the use of a comparative linear scale of judgment and the evidence to be used to support judgment.
Over the past 20 years, the inspection process became increasingly narrowly focused on trying to provide consistent judgments of the overall quality of a school, an essential component of the market model of schooling that dominated UK government thinking in the 1980s and '90s. Reports, and the inspections on which they were based, became increasingly reductionist: every aspect of the process was geared towards finding or confirming where a school sat on the linear scale of quality, originally with four points, more recently with six - from unsatisfactory to excellent.
A linear scale could never capture the range and quality of the different complex outputs of a school community. In practice, parents in Scotland used a much wider range of local information to support their choices. However, even after devolution, successive Scottish governments supported HMIE's continued development of this approach. In secondary schools, in particular, examination results were a dominant factor in the judgment. Schools with poorer examination results, which tend to be located in areas with less social capital, are statistically more likely to be judged further down the six-point scale than schools in socially advantaged areas.
This has a double whammy effect: staff in such schools often go many extra miles with less-motivated pupils and so might reasonably feel that the educational work they do is both better and more important for the health of our society, yet they are judged to be professionally worse than colleagues working in more educationally favoured environments. Nothing was more demotivating or demoralising for hard-working staff in difficult situations than for their work to be summed up as "adequate" or, more recently, "satisfactory". Last month's TESS (25 February) reported that the linear scale may be scrapped. The sooner this is done, the better the new framework will achieve its ambitions.
The second major area of concern is the evidence base for evaluation. The consultation report makes clear the concerns of respondents that the new approach "relies on good self-evaluation and authority quality assurance". These concerns have real foundations. Examination results have a place in any good quality assurance process, but HMIE, because of its recent obsession with making comparative judgments, has not developed a robust methodology for factoring the complex socio-economic context into its statistical database. These factors all militate against definitive judgments of "good" or "weak".
Scotland, moreover, has 32 local authorities that vary widely in their ability to provide appropriate support and challenge to their schools, even more so following recent budget cuts. Different local authorities adopt different approaches to moderating the self-evaluative activity of their schools, depending on the quality of educational thinking and the resources available within the authority.
The consultation research further reports concerns over "LA input - consistency, need for a national assessment tool". School A, working within a local authority with a well-developed, consistent, centrally supported system for gathering information from schools, uses this information to support its own self-evaluation. It is much better placed to meet the expectations of HMIE than School B, where the local authority lacks such capacity, either in personnel or systems, or diverts the energy of the school into corporate quality assurance processes that have limited educational focus or do not tie into national inspection requirements.
These are just two examples of a major current limitation in the inspection process which is, unfortunately, not addressed at all in the new framework. HMIE shows an interest only in the outputs of schools and evaluates these without reference to the inputs, such as the supportive capacity of the local authority or the social capital of the school's catchment. Inputs at local authority level that would contribute to a "balanced scorecard" include budgets, accommodation, staffing and recruitment policies and staff development capacity. Given the great inconsistencies across Scotland's 32 local authorities in these areas, a "national assessment tool", assessing inputs as well as outputs, would be a significant step forward.
The tone is greatly improved, but further work is required to ensure the system will deliver for Scotland's schools and their pupils.
Danny Murphy, Education adviser.
Danny Murphy is a former headteacher in Scotland, working this year as a VSO technical adviser with the Ministry of Education in Cambodia.