The Beaufort Wind Scale - why we need an observation revolution

24th May 2013 at 16:05


Before technology bled the last drop of romance from navigation, sailors used the Beaufort Wind Force Scale to estimate wind speed. Before the scale, lacking instruments to measure conditions accurately, mariners had resorted to subjective value judgements - ‘rowdy seas’, ‘stiff breeze’ and so on - that varied from ship to ship.


In 1805, Admiral Francis Beaufort devised an elegant and beautiful standard which used a clever trick to avoid the ambiguity: he described the effect that each level of wind or storm had on the sails of a man-of-war (the most common ship of the time). The scale ranged from 0 (dead calm) to 12 (that which no canvas could withstand).


Adjusted, extended and sometimes still used (fans of Radio 4 will be used to catching up with the weather conditions in the Orkneys), it was a way to gauge something as wild as the wind without codifying it. Rather than bottle lightning, Beaufort simply described it.


I’m reminded of this elegance every time I consider the inelegant way that observations stifle good teaching, when they should be helping to generate it. As a teacher, being formally observed ranks just below self-immolation as an activity of choice. Part of this is because many of us, if we have any shame, are marbled with insecurities: some valid, some imagined. Lesson observations are perfect opportunities to have our deepest professional anxieties stuck on a spike and displayed at the school gates. It is a death by a thousand ticks.


Part of the problem lies with the nature of our job. Teaching is a practice, and while we may be competent practitioners, the fluid and uncertain nature of our work means that success is never guaranteed in any given lesson. Nor is the definition of success agreed upon. Nor are the means by which it is obtained. At times it feels more like a tombola than a test-your-strength.


The question of what counts as a successful lesson is as disputed as the name of God, and as inscrutable as Schrödinger’s Cat. The problem lies in the concrete coffins of bureaucracy: when any set of observation criteria is systematised, then in order to be scaled up to any level of comparative efficiency it must also be codified and reified. That may serve perfectly well when what is being judged is amenable to reduction, like testing a steel cog to destruction, or measuring the height of a tree. But when a lesson observation is reduced to a list of necessary and sufficient conditions, it makes the mistake of every administrator - it assumes that there is no more in Heaven and Earth than is dreamt of in their philosophy. I’ve seen scores of lesson observation criteria lists, and all of them are open to dispute.


One problem is that the things that are easy to observe, such as a three-part lesson or the presence of peer assessment, cannot easily be correlated with what makes a good lesson (and often, such assumptions rest on the dodgiest of dogma and the ropiest of science). Another is that criteria are often so distressingly counter-intuitive that they feel as if they were designed by the Child Catcher of Vulgaria - demanding demonstrable progress in twenty minutes, for example. That’s asking artisans to become bean counters. And when you treat teachers like bean counters, you get beans, counted.


Yet another issue is that observation itself is a skill. For a start, in order to provide meaningful advice the observer has to be, at the very least, as good a teacher as the one being observed. This is by no means guaranteed if you design observation schedules that substitute seniority of position for teaching ability. Why should it be? Staff aren’t promoted in schools on the basis of how well they teach, so why would we expect ranking officers to be better at it than anyone else? This mutton-brained assumption is behind a great deal of post-observation blues, when lambs observe lions. I once saw an outstanding teacher ruined by a senior manager (who couldn’t keep order in a morgue) telling him that he should have used consequence codes in an observed lesson; the teacher was failed accordingly.


And finally, one of the most common mistakes observers make is simply to judge the lesson by how closely it matches the way they would have taught it themselves. This strategy exposes teachers to the Hell of Aesthetics: if your style doesn’t match the tastes and prejudices of the observer, you’re stuffed. In my early days I was reprimanded for not using enough role-play, I kid you not.


This all matters because careers are moulded by such things, and broken too. Observations are a powerful lever to enact change, especially when they are linked to career progression, pay increments, and performance management measures. The thirst for hard data to feed the maw of the evidence machine perverts and vivisects the practice of teaching.


Observations should be based not on what the teacher is doing, but on what happens to the children as a result. Like old Beaufort watching his rigging and canvas, we should care more about the effect than the numbers. John Hattie - hallowed be his data - says as much when he advises observers to focus on what is being learned rather than what is being taught.


And this is true. One thing observations are good at is correcting error. Just because teaching is a subtle craft doesn’t mean that teachers can do anything they like. They need to be reminded when they are doing things that actively discourage learning - such as allowing or tolerating misbehaviour - and then, more importantly, not scolded or keelhauled for the weakness, but trained and supported past it. High-stakes observations help no one; they turn what should be a beautiful opportunity to learn and train into a gauntlet.


Who watches the watchmen? Who cares? I’m watching what the kids learn.