The profession of teaching is filled with no end of debates. Setting or mixed ability? Groups or rows? English or literacy? Red or green pen? Some of them are probably central to improving the work we do and worthy of in-depth investigation. And then there are the discussions about pen colours...
Much of the debate rages on because it’s always tricky to get hard-and-fast rules in a field that is so hard to pin down. As any teacher will tell you: the job would be so much easier if it weren’t for the children. Unfortunately, they bring levels of complexity that make it hard to end any discussion with a simple answer.
People are the inherently unreliable element in the education cycle. That's what leads so many to feel uncomfortable about testing and assessment. The reduction of what is an incredibly complex business to a single set of numbers seems a step too far. Those who spend their lives responding to the specific and individual needs of human beings can become concerned when they see that complex individual turned into a score.
But I’ve never thought that the answer was to stop doing the tests. Tests and other assessments, despite their flaws, have their place in teaching and learning. We just have to remember that the flaws exist. The same is true of teacher judgements, which have failings of their own; we need to take those into account too.
It’s for that reason that I’m pleased to see a particular phrase in the report published today by the Association of School and College Leaders: “sense and accountability”. As part of the group that contributed to the report, I was keen for it to be clear that schools are not averse to assessment or accountability, but to the over-zealous use of what we know is unreliable data.
As the report states: “Data…should always be the start of the conversation, not the conversation itself.”
A new model
That sentiment is in stark contrast to the way accountability too often works at present. Ofsted’s Inspection Data Summary Report, for instance, provides “areas to investigate” in each school. Terminology matters here. The word “investigate” carries a sense of trying to uncover something: to reveal a previously concealed truth.
Rather than investigations every few years, we need to move towards a model where schools and school leaders are having conversations about the quirks thrown up in the data. Sometimes that conversation will be very brief. For a huge number of primary schools (around 1,500 in 2016), the pupil numbers in Year 6 are too low for any statements to be made.
But for another 100, with fewer than 15 pupils – but more than the magic 10 – the statements are based on tiny samples. If you have 12 Year 6 pupils, each one of those pupils is worth more than 8 percentage points on your attainment data. Having two pupils with specific needs that prevent them from achieving the expected standard means your school will never be able to reach national averages. Add in a third wayward result and you’re below the floor standard before you ever begin.
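The small-cohort arithmetic is easy to check for yourself. A minimal sketch (my own illustration, not anything from the report; the function names and example numbers are hypothetical):

```python
def points_per_pupil(cohort_size: int) -> float:
    """Percentage points each pupil contributes to the headline attainment figure."""
    return 100 / cohort_size


def attainment_ceiling(cohort_size: int, pupils_below_standard: int) -> float:
    """Best possible attainment (%) once some pupils cannot reach the standard."""
    return 100 * (cohort_size - pupils_below_standard) / cohort_size


# In a cohort of 12, each pupil moves the headline figure by roughly 8.3 points.
print(round(points_per_pupil(12), 1))      # 8.3

# Two pupils unable to reach the expected standard cap attainment at about 83%.
print(round(attainment_ceiling(12, 2), 1)) # 83.3

# A third wayward result drops the ceiling to 75%.
print(round(attainment_ceiling(12, 3), 1)) # 75.0
```

The same two-line calculation shows why a cohort of 60 barely notices a single result while a cohort of 12 swings wildly on one.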
Even in large schools, very few have cohorts large enough to make one year’s data set meaningful in its own right. Cohorts differ. Pupils differ. Goodness, at the moment, assessment cycles differ.
But all of that’s OK, so long as we don’t start presuming that the data alone can tell us what we need to know. If we recognise all those weaknesses in the data, then we can start to have meaningful conversations.
Michael Tidd is headteacher at Medmerry Primary School in West Sussex. He tweets @MichaelT1979