
The importance of finding the success in failure

Education is obsessed with success. We scour the processes of successful systems, schools and teachers in the belief that this will bring us success, too. But Nick Rose argues that if we never look at failure in the same way, if we never analyse when things have gone wrong, we will never truly know what success looks like



During the Second World War, the survival rates of US air force bomber pilots became a particular concern: the National Defense Research Committee put the odds of getting safely back to base from some missions as comparable to winning a coin toss.

It wasn’t practical to add extra armour to a whole plane, so the committee decided to see if there was a pattern in the parts of the planes that were being hit and then add extra armour to those sections.

Sounds reasonable, you might think.

But one of the mathematicians involved, Abraham Wald, spotted a subtle flaw: they were only looking at the successful examples – the planes that had survived. The damaged areas were the strongest or least vital parts of the planes – Wald argued – because the aircraft had all survived being shot up in these locations. The planes that didn’t make it back – almost impossible to examine, for obvious reasons – were more likely brought down by damage to other locations.

This case illustrates “survivorship bias” – the error of analysing only successful examples to find the secrets to success.
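Wald's insight is easy to see in a toy simulation (all the numbers here are invented for illustration, not historical data): suppose every plane takes one hit in a uniformly random section, but hits to the engine or cockpit are far more likely to bring a plane down. Counting damage only on the planes that return then dramatically under-represents the fatal sections:

```python
import random

random.seed(42)

SECTIONS = ["engine", "cockpit", "fuselage", "wings"]
# Hypothetical chance of making it home after a hit in each section --
# the fatal areas (engine, cockpit) are exactly the ones that rarely
# show up as damage on returning aircraft.
SURVIVAL_P = {"engine": 0.2, "cockpit": 0.3, "fuselage": 0.9, "wings": 0.85}

all_hits = {s: 0 for s in SECTIONS}       # what actually got hit
survivor_hits = {s: 0 for s in SECTIONS}  # what the analysts could see

for _ in range(100_000):
    section = random.choice(SECTIONS)     # hits land uniformly at random
    all_hits[section] += 1
    if random.random() < SURVIVAL_P[section]:
        survivor_hits[section] += 1

# In reality the hits are spread evenly across sections, but among the
# returning planes fuselage damage dominates and engine damage is rare --
# exactly the pattern Wald warned against armouring for.
```

Armouring the sections with the most visible damage would protect the places a plane can already survive being hit.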

This article was originally published in Tes magazine.

In education, we rarely apply Wald’s simple logic. Rather, we appear obsessed with success: successful systems, successful heads, “outstanding” schools and teachers. These stories are publicised and celebrated, and frequently studied by researchers.

Meanwhile, apparently unsuccessful systems, struggling schools or disappearing heads are written off and quickly forgotten. Their existence is a source of embarrassment, or they are portrayed as bogeymen – “you don’t want to end up like them” – without anyone ever really understanding what happened.

If we truly want a successful education system, we need to change this approach.

Bias towards success

The trouble is, looking to success is such a naturally appealing way of thinking about problems. Wald made his counterintuitive argument more than 70 years ago, but there are still many contemporary examples of his advice being ignored.

Here’s one: each year after GCSE or A-level results, it’s not uncommon for a handful of the usual celebrity suspects to proclaim that they went on to become successful despite not achieving good grades or qualifications at school. The story of Steve Jobs dropping out of college or Lord Sugar arguing that university is a “waste of time” might lead people to believe that hard work, commitment or creativity are far more important than qualifications if you want to enjoy the sort of success that they had.

But Michael Shermer, writing for Scientific American, has pointed out that these sorts of success stories ignore the failure rate of entrepreneurs who might seek to emulate this path. For every Steve Jobs, there are perhaps 100 hard-working, committed creative types who failed to get their start-ups out of the garage.

Another example comes when researchers use meta-analysis to summarise the effect of an intervention. In research, survivorship bias is called the “file drawer effect”. For a range of reasons, research studies that produce positive results – ie, statistically significant results supporting the hypothesis – are more likely to be published than studies that find no effect. If these latter studies aren’t published – we might imagine them hidden away at the bottom of filing cabinets – it creates a real danger that summaries of research are skewed towards a misleading and over-optimistic interpretation of findings.
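The file drawer effect can also be shown with a toy simulation (the effect size, noise level and significance rule are invented for illustration): simulate many studies of an intervention that genuinely does nothing, "publish" only the studies that happen to clear a significance threshold, and the average published effect looks impressively positive:

```python
import random
import statistics

random.seed(1)

TRUE_EFFECT = 0.0   # the intervention genuinely does nothing
STUDY_SE = 0.2      # sampling noise in each study's effect estimate
N_STUDIES = 2000

# Each study estimates the effect with noise; call a result
# "significant" (crudely) when its z-score exceeds 1.96.
estimates = [random.gauss(TRUE_EFFECT, STUDY_SE) for _ in range(N_STUDIES)]
published = [e for e in estimates if e / STUDY_SE > 1.96]

full_mean = statistics.mean(estimates)       # close to the true effect of 0
published_mean = statistics.mean(published)  # inflated well above zero
```

A meta-analysis of the published studies alone would conclude the intervention works, even though the full set of studies shows no effect at all.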

If you take a look at how we try to improve systems in education, you will see the pockmarks of survivorship bias everywhere.

Talk to teachers who quit

Most readers will be well aware of some of the pressing questions in education at the moment: how do we improve schools? How do we attract more people into teaching? How do we retain great teachers in the schools that need them most?

Our approach to answering these questions relies too heavily on examining the views of teachers successfully recruited or retained within teaching, or on examining the schools rated as "outstanding", for clues as to how policymakers and school leaders might change things for the better.

For example, from surveys or interviews of teachers we might identify that graduates who join the profession aren’t that bothered about the relative rates of pay compared with their non-teaching peers and are much more concerned about having access to subject-specific career and professional development. Therefore, to increase recruitment, shouldn’t we focus on the CPD offer?

And we might conduct surveys of teachers and look at people returning to teaching after a career break, to discover a demand for “flexible working policies”. In the hope that such policies will help to retain teachers, should we focus on creating more part-time posts within education?

Finally, we might scour Ofsted inspection reports to discover that “outstanding” schools have a number of common features – perhaps things like being “meticulous in monitoring children’s learning and development” in early years or being “tenacious in their aim for high standards in teaching and learning” in special schools or having strong “communication and collaboration” among school leaders or the presence of “robust tracking and assessment systems”.

It makes sense, then, to focus on improving all these qualities in less-than-outstanding schools, right?

Well, maybe. But importantly, as the case of Abraham Wald illustrates, maybe not.

Poor imitations

For example, perhaps high-quality CPD is a genuinely important component of teacher retention – but teachers still working in schools are the success stories. So what if we talk to teachers who are leaving the profession: how much of a factor is the issue of CPD in their decisions? Perhaps pay isn’t a priority among people who make the decision to join teaching, but is it an important factor for those who choose to leave it?

Maybe it’s true that teachers who return after a career break value being able to take part-time, flexible hours, but what about the people who don’t come back? Is the issue of being able to sustain a full-time role as a teacher a bigger factor in their decision?

By only asking teachers who join or return to the profession, are we merely counting the bullet holes of the planes that made it home?

And outstanding schools do many things, but how many of them are things that schools in category 3 or 4 don't do? It's hard to imagine a school where aiming for high standards of teaching or monitoring pupil progress isn't considered a priority. By simply listing features, without acquiring a deeper understanding of the true role they play in success, are we at risk of driving up workload?


Which brings us to an additional, related problem with our obsession with success – one that Richard Feynman, the physicist and sceptic, described as “cargo-cult science” back in the 1970s.

“In the South Seas, there is a cargo cult of people. During the war, they saw aeroplanes land with lots of good materials, and they want the same thing to happen now. So they’ve arranged to make things like runways, to put fires along the sides of the runways, to make a wooden hut for a man to sit in, with two wooden pieces on his head like headphones and bars of bamboo sticking out like antennae – he’s the controller – and they wait for the aeroplanes to land.

“They’re doing everything right. The form is perfect. It looks exactly the way it looked before. But it doesn’t work. No aeroplanes land. So I call these things cargo-cult science, because they follow all the apparent precepts and forms of scientific investigation, but they’re missing something essential, because the planes don’t land.”

It’s easy to understand why we might imitate things that seem distinctive in a success story, but without understanding the true causes of success – and failure – our time, energy and money are often wasted trying to recreate the appearance of that success, rather than the substance.

For example, a school leader might be tempted to look to a nearby “outstanding” school to identify best practice. The reasoning is appealing: that school is successful, so if we do what it does, we’ll enjoy greater success. So, if they introduce time-consuming marking policies or require teachers to regularly submit complex lesson plans or run “mocksteds”, it’s tempting to copy these features regardless of whether they actually contribute anything to the school’s outcomes.

Where a country like Finland or Singapore scores highly in an international test, it’s tempting to try to lift out an aspect of teaching or school structure that appears distinctive and try to recreate it in your home education system.


However, whether that feature genuinely underpins the system’s apparent success is rarely, if ever, certain.

Our flawed fascination with success stories makes it all too easy to slip into this cargo-cult mentality – reproducing the superficial elements of a school or a system that appears to enjoy success, without understanding the deeper causal reasons for that success.

So how do we make a change?

I think we need to examine relative success and failure more closely, to understand their differences and start to generate a real understanding of what causes them.

Success comes about because of luck as well as judgement, and locating a signal among that noise isn’t straightforward. There are positive approaches to this problem in the UK: for example, the increasing use of robust evaluation methods such as randomised controlled trials can, over time, help build up a better picture of which features of interventions do and don’t help.
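A quick sketch shows how much "success" pure luck can manufacture (all numbers invented for illustration): give 100 hypothetical schools identical underlying quality, let observed results be nothing but cohort-to-cohort noise, and the school topping the table one year looks exceptional – yet is, on average, perfectly ordinary the next:

```python
import random
import statistics

random.seed(7)

N_SCHOOLS = 100
N_TRIALS = 500

winner_year1 = []  # the top school's score in the year it "won"
winner_year2 = []  # the same school's score the following year

for _ in range(N_TRIALS):
    # Every school has identical underlying quality; observed results
    # are pure year-to-year noise (cohort effects, luck).
    year1 = [random.gauss(0, 1) for _ in range(N_SCHOOLS)]
    year2 = [random.gauss(0, 1) for _ in range(N_SCHOOLS)]
    best = max(range(N_SCHOOLS), key=lambda i: year1[i])
    winner_year1.append(year1[best])
    winner_year2.append(year2[best])

# The table-topping school looks far above average in the year it wins,
# then regresses to the mean: its apparent edge was luck, not method.
```

Copying the practices of last year's table-topper, in this world, teaches you nothing – which is why controlled comparison, not imitation, is needed to separate judgement from luck.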

But where we cannot reasonably (or ethically) conduct well-controlled experiments, it is important to examine the differences between models that appear successful and those that aren’t.

The causes of failure

One example of this comes from the US, where Anna Nicotera and David Stuit analysed the trajectories of charter schools (the closest equivalent of which in England is probably free schools). Set up to encourage innovation and raise the attainment of poorer students, some charter schools do far better than others at educating their students.

By examining the applications of high-, middle- and low-performing charter schools, the researchers sought to identify the risk factors in applications that might be used to predict those that would fall short.

They found three factors that might represent particular risk: a lack of an identified school leader in applications that proposed self-management; applications that proposed to serve at-risk pupils but did not include sufficient academic supports (eg, intensive small-group instruction or individual tutoring); and charter schools that proposed highly “child-centred” or inquiry-based pedagogies (eg, Montessori, Waldorf, Paideia or experiential programmes).

Innovation and experimentation

The researchers were quick to point out that the presence of one or more of these risk factors didn’t necessarily have a causal relationship with future school performance, and shouldn’t be used to discourage innovation and experimentation with curriculum and pedagogy. But they argued that the factors could serve as an additional tool to better understand the risks and to enhance the decision-making of charter school authorisers.

Obviously, there are differences between the US and UK systems, but it seems likely that schools judged successful and those that are not do many of the same things. Could we use similar comparison approaches to identify the risk factors that predict which schools will struggle – and thereby gain a greater understanding of what support might turn them around more quickly?

Likewise, teachers who remain in the profession and those who leave may identify similar issues. Could we capture, in a more systematic way, the reasons that lead teachers to leave a profession in which they’ve invested so much time and effort and, occasionally, tears – and thereby gain a better understanding of how to retain them?

As teachers, we understand the importance of articulating and modelling success when helping our pupils to learn, but it would be bizarre if we handled their failures the way we handle the failures of schools – writing the pupil off as a failure, or ignoring why they are struggling.

Instead, we spend time trying to analyse and understand the causes of pupils’ struggles – their prior knowledge, their misconceptions, their educational needs – in order to scaffold their journeys towards success.

I think we should start to see success and failure within education similarly: as a process rather than a category. Only then, perhaps, might those “imposters” of triumph and disaster tell us something we can learn from.

Nick Rose is a former leading practitioner for psychology and research and recently worked as a research specialist for Teach First. He tweets @Nick_J_Rose

