How to ensure your school’s data doesn’t rely on luck

Schools invest a lot of time and energy collecting information to establish which approaches work. But when such data is applied without due care, it’s about as useful for improving outcomes as crossing your fingers or wearing lucky pants, finds Jo Clemmet
1st January 2021, 12:05am

Have you ever noticed the way that some students hang back to be the last to enter an exam hall, while others attempt an inconspicuous speed-walk to be the first through the door - as if the position in which they enter will somehow affect their performance? 

Or, perhaps you always make sure that you wear your "lucky" socks when you have an important presentation to deliver.

Many of us have routines or possessions that we think of as "lucky". Even if we know, logically, that there is no such thing as "luck", that doesn't always stop the impulse to do something that we have always done "just in case", or the desire to seek out ways to control the apparent randomness of life.

Schools are complicated and volatile places, and while many of us would like to believe that luck has no part to play in outcomes, the truth is that plenty of elements of education are completely out of our control. 

As a school leader, this can be difficult to accept. To admit that luck and randomness play a role in outcomes means acknowledging that we're not fully in charge of what happens to our pupils, and this can result in us seeking out practices and routines that give us back some illusion of full control.

You might not find (many) members of the senior leadership team carrying rabbits' feet around, but there are other, more subtle ways that we school leaders fool ourselves into thinking we're in charge of everything.

A prime example is that schools, quite rightly, invest huge amounts of time and effort into finding out what works and what does not, based on data.

All schools collect and analyse copious amounts of data to discover who the "best" teachers are, which are the most effective rewards, and which classroom strategies have the biggest impact. Great school leaders have vision, values and well-defined plans, and they also have data to test, prove and justify their approaches. 

But data has its limitations. The use of mathematical data in schools gives the veneer of scientific reliability to the decisions we make as leaders. However, data analysis that identifies patterns but merely assumes causal links is, in some ways, little better than relying on luck.

What's more, overestimating the reliability of data and using it in an unjustly severe or arbitrary fashion creates a tyranny that can severely hamper students' learning and teachers' careers. Instead of illuminating our paths, data can too easily fog our vision and lead us into damaging dead ends.

So, what are some of the common errors that school leaders make when following data-informed approaches? 

1. Hindsight bias 

Hindsight bias is a phenomenon whereby, once an event has taken place, we revise our estimate of how likely it was to happen, or exaggerate how far we could have predicted it.

Our brains are superbly adapted to spot changes and patterns in complex data, and simplified relationships can be more easily stored in our memory. However, problems arise when we link patterns to the wrong causes, particularly in hindsight. 

We are brilliant at retrofitting convincing explanations to random data. For example, imagine that a teacher introduces a new questioning technique in a particular year. If results are good for that year, the teacher might attribute this to the questioning innovation. But unless they also carry out proper experiments with control groups, it is impossible to ascertain whether the questioning made a difference, or whether results improved because of other factors.
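To see how easily chance alone can produce a "success story", consider a simple simulation (the numbers here are purely illustrative, not drawn from any real school): 20 teachers each try a new technique, but their results in both years are pure noise, so the technique has no effect at all.

```python
import random

random.seed(42)

# Hypothetical illustration: 20 teachers each try a new technique in year 2.
# Results in both years are random noise (mean 60, sd 8) - the technique
# genuinely does nothing.
improved = 0
for teacher in range(20):
    year1 = random.gauss(60, 8)  # average class score before the change
    year2 = random.gauss(60, 8)  # average class score after the change
    if year2 > year1:
        improved += 1

# Roughly half the teachers will see "improvement" by chance alone, and
# every one of them could retrofit a convincing causal story to explain it.
print(f"{improved} of 20 teachers saw higher results after the change")
```

Without a control group, each of those chance improvements looks like evidence that the technique worked.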

2. Attribution bias 

Attribution bias occurs when we make judgements and assumptions about why someone has behaved in a certain way. 

These judgements may not accurately reflect reality but they help us to make sense of our own or others' behaviour. This bias often leads us to attribute success to our skill and failure to randomness. 

For example, a member of staff who is skilled at writing an appraisal document can easily highlight data that shows them in the best light. Inconvenient data that does not fit the successful narrative can then be blamed on randomness or bad luck. This can make average teachers look good, and good (but less savvy) teachers look average.

3. The paradox of praise

According to psychologist Daniel Kahneman, the belief that praise or criticism affects performance is misguided. Changes in a person's performance can often simply be chalked up to what he calls "regression to the mean" - a return to "normal" performance. To put it simply, interventions may precede improvement in a task - but they might not necessarily cause it.

For instance, if students underperform in a single subject across the board, multiple interventions will be put in place. Following the increased pressure, performance in the subject will (usually) subsequently improve and we will congratulate ourselves. But while the improvement could have been down to the measures we put in place, we could also be seeing a reversion to mean performance. 

Indeed, our interventions could actually have reduced the scale of improvement.

There is no way for us to truly know. 
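A rough simulation makes the effect concrete (again, the figures are invented for illustration): treat each student's score as stable ability plus exam-day luck, flag the bottom 20 per cent after one test, do nothing at all, and watch their average rise on the next test anyway.

```python
import random

random.seed(0)

# Hypothetical illustration: score = stable ability + exam-day luck.
abilities = [random.gauss(60, 5) for _ in range(500)]

def sit_test(abilities):
    # Each sitting adds fresh random "luck" on top of underlying ability.
    return [a + random.gauss(0, 10) for a in abilities]

test1 = sit_test(abilities)
cutoff = sorted(test1)[len(test1) // 5]  # bottom-20% threshold
flagged = [i for i, score in enumerate(test1) if score <= cutoff]

test2 = sit_test(abilities)  # no intervention has taken place

before = sum(test1[i] for i in flagged) / len(flagged)
after = sum(test2[i] for i in flagged) / len(flagged)
print(f"Flagged group: {before:.1f} on test 1, {after:.1f} on test 2")
# The flagged group's average rises with no intervention at all:
# they were partly selected for bad luck, and luck does not repeat.
```

The flagged students improve simply because they were partly selected for bad luck, which is exactly the pattern an intervention would be credited with.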

What can school leaders do to avoid falling into this, and other, traps and make sure that we are not putting too much faith in data that may be less reliable than it seems? Here are some practical suggestions.

  • Do not mistake absence of evidence for evidence of absence. Some of the most effective interventions can't be quantified. Just because the impact of a new coffee machine in the staffroom cannot be easily captured in the data does not mean it is not important.
  • Allow for variability. Set a range of acceptable outcomes to account for randomness and avoid a high-stakes "all or nothing" mindset. If you set a definitive success target, you allow for only two outcomes: success or failure. Make room for a third outcome: "falls within the successful range".
  • Judge by actions as well as results. Supplement your data analysis with other forms of quality assurance. Observing teaching or student work allows a rounded and more reliable picture to emerge, of which numerical data is only one part.
  • Embrace randomness. Make plans and stick to your principles but don't be too rigid. Data is great for generating questions, so pounce on unexpected positive relationships and test them out. You might discover surprising new solutions.

Ultimately, data analysis can sometimes suggest links between causes and effects that do not really exist, in the same way that your lucky socks do not really make you a great presenter (sorry).

The veneer of scientific objectivity hides the complexity of what really goes on in schools. Collecting, presenting and describing data is the easy part. Deciding what it does, and does not, reveal is far harder - and getting this part right cannot be left to luck. 

Jo Clemmet is a secondary geography and economics teacher and senior leader

This article originally appeared in the 1 January 2021 issue under the headline "How to ensure your data isn't just a rabbit's foot"
