'Ofsted has questions to answer on graded lesson observations'

 

Dr Matt O’Leary, principal lecturer and research fellow in post-compulsory education in CRADLE (Centre for Research and Development in Lifelong Education) at the University of Wolverhampton, writes:

In a recent TES article, Stephen Exley reported that Lorna Fitzjohn, Ofsted’s national director for FE and skills, was surprised at the strength of feeling among senior FE managers in favour of continuing graded lesson observations, despite the vast majority of lecturers opposing the practice.

Why this conflict of opinions should come as such a surprise to Ms Fitzjohn is itself surprising, particularly given that it was clearly highlighted only a few months ago in the largest piece of research ever conducted into the use and impact of lesson observation in the sector.

Nevertheless, this divide between the perceptions of senior managers and practitioners regarding the continued use of observation in the sector creates a significant dilemma for Ms Fitzjohn and for Ofsted in terms of its current FE and skills pilot and its subsequent evaluation. But it is a dilemma that Ofsted needs to confront directly and transparently if the evaluation is to retain credibility and the inspectorate is not to be accused of prioritising the views of senior managers over those of the sector’s teaching staff.

Only recently, Lorna Fitzjohn revealed that Ofsted ‘would like to hear from teachers as well as managers in our consultation and during the pilots’. There is no doubt that many FE teachers would welcome the opportunity to contribute to the pilot/consultation, but they remain in the dark as to how to do so. Despite repeated requests for clarification on what the consultation will involve and how practitioners might contribute, information has not been forthcoming from Ofsted.

Whether intentional or not, this lack of clarification could have significant repercussions for the credibility of the pilot’s evaluation as a whole, as it raises a number of questions. What does it reveal about the sampling strategy and wider methodology used for the consultation/pilot to date? What impact is this likely to have on the validity and reliability of its findings? In other words, how will Ofsted defend itself against allegations of bias in both its data collection and its analysis? These are all fundamental issues for anyone carrying out an impact evaluation study to consider.

As a minimum starting point, I would suggest that Ofsted needs to answer some of the following questions. How does Ofsted intend to capture the opposing viewpoints of senior managers and practitioners in its evaluation? What measures will be taken to ensure that the differing opinions of the two groups are reflected in its reporting and decision making? How will the key findings of the evaluation be used to inform Ofsted’s future policy on lesson observations in FE and skills inspections, and will the rationale for this decision making be shared publicly?

You might reasonably expect answers to some of these questions to have emerged from the schools-based pilot that took place in June of this year, but at present all we know is that, according to Sir Michael Wilshaw, it ‘proved incredibly popular’.

Despite calls for Mike Cladingbowl, Ofsted’s national director for schools, to share the findings from the schools’ pilot, Ofsted has yet to do so; in the words of Mr Cladingbowl, it has ‘no immediate plans to publish the formal evaluation of the pilot’.

But why not? Surely the findings are important to share with the teaching profession as a whole? Why would you bother to carry out an evaluation in the first place if you didn’t intend to share the findings with the very people it affects? Besides, as a matter of ethical responsibility, aren’t the participants who were involved in the schools’ pilot entitled to know why it ‘proved incredibly popular’ and whether it was popular with everyone involved or specific groups?

Until the findings from the schools’ pilot are shared openly, the specific rationale for Ofsted’s decision to stop grading individual lessons in school inspections will remain unclear. We will never know, for example, how the new ungraded approach compared with the previous graded approach across the different groups involved, or what challenges and areas of (dis)agreement inspectors encountered in adopting an ungraded approach.

Until this detailed information is released, all we have to go on is the impressionistic claim that it ‘proved incredibly popular’, which hardly embodies the robust and rigorous approach to evaluating evidence that Ofsted prides itself on when conducting inspections. But then again, maybe this reveals a more accurate picture than we realise of how policy decisions are made?
