
‘Too many educationalists take the unforgiving line that all education research should be like medical trials’

Why we should stop looking to pharmaceutical and psychological studies as a model and instead find inspiration in anthropology


Tom Bennett recently indicted "learning styles" as illustrating the empirical poverty of many educational fads. Bennett has consistently championed the need to underpin theory with "good" (robust and replicable) research. But are we in danger of defining research in such a restrictive way that, with the best will in the world, initiatives and innovations get choked off and teachers run scared from acting on experience and intuition?

We mustn’t throw students in front of fast-moving bandwagons. Teaching and learning strategies need to be grounded in knowledge of what works – knowledge that comes from research and reflection. The question, though, is what do we mean by research?

A growing number of educationalists take an unforgiving line on what constitutes proper research. That line is taken, verbatim, from a positivist version of science, wherein there are only two types of meaningful statements: those that are true by definition (eg, mathematical proofs) and those that can be falsified. Anything else is hot air and "bad" science.

The falsifiability bar is raised sky-high when propositions have to be tested by randomised controlled trials involving huge numbers of individuals. Sure, for a proposition to be accepted we need confidence that its findings are generalisable. If you try something out successfully on a single class, how do you know it’ll work with other groups, especially if you haven’t also established what happens without the intervention?

Pouring scorn on half-baked ideas swallowed whole from pop psychology is one thing; setting the entry barrier to research so high that most teachers cannot clear it is another, and it compromises the commendable drive for more practitioners to engage in action research. The US initiative What Works Clearinghouse found that only 10 per cent of education impact studies meet its narrow definition of (positivist) science. Does this mean that the other 90 per cent are worthless?

The narrow view of "good" science is based on the conviction that educational research should be more like medical trials. But there are dangers to conflating a critical, research-informed approach with a reductionist assumption that positivist science is the only science worth the name.

Large-scale studies are useful precisely because they seek to hold complicating factors constant and effectively flatten an uneven terrain. But they are not gospel. In seeking to simplify, they take a narrow view of student outcomes and teacher effectiveness. Large-scale education studies can be criticised for ignoring background and context – often only school effects are considered, and even then only a single "outcome" of schooling: achievement in tests. This reductionism unintentionally underlines the mantra that only what can be counted, counts.

Martyn Hammersley observed that teaching is practical, but not technical: it cannot be based on research knowledge, since it depends on a combination of experience, wisdom, local knowledge and judgement – shades of Richard Sennett’s The Craftsman.

But let’s not write off "research knowledge". Let’s just agree that robust research can be of different methodological stripes. Perhaps the best analogy for educational research should be, not medical or psychological trials, but the embedded enquiries of anthropologists.

Dr Kevin Stannard is the director of innovation and learning at the Girls' Day School Trust. He tweets at @KevinStannard1

