Tes talks to...Brian Nosek

The academic and campaigner for greater integrity in research tells Chris Parr about the need for teachers to be circumspect about any study’s reliability before incorporating its findings into their lesson plans
15th June 2018, 12:00am


The notion that teaching should be research-informed - and that the best teachers use evidence-based techniques in the classroom - is pretty controversial in certain education circles. But it should be even more widely contested, according to Brian Nosek, who has some doubts about how reliable some of the research in education really is.

Nosek is a professor in the department of psychology at the University of Virginia, and executive director and co-founder of Virginia’s Center for Open Science, which works to increase the integrity of research across all disciplines.

Education, he says, is a field particularly susceptible to research problems. “It is easy for schools to jump on to the latest teaching research fad and say, ‘Oh, they found the solution here, this is going to solve everything,’” he says. “But education has all of the features that make doing robust, reliable, reproducible research challenging.”

The “features” Nosek refers to include the very way in which academic research is structured, with the successful publication of papers in prestigious journals being the “main currency of career advancement” for researchers. Education researchers “need to publish their research in order to get jobs, to advance in their careers and to get promotions”, he argues. “Those pressures are real, and the competition is heavy, so researchers are effectively motivated to maximise the ‘publishability’ of their results, even at the cost of their credibility.”

Although this could happen intentionally, since “people do sometimes just make stuff up”, most examples of malpractice in this area are more subtle, he says.

“For example, a researcher might have an idea in advance - they might think that a particular educational intervention is going to be effective. That is often why they are studying that intervention in the first place.”

This, he says, can result in confirmation bias. “If they have flexibility in how they analyse their data to decide if the intervention was successful, then they could choose to publish only the best outcome of many outcomes they observe.”

‘Yeah, it’s a concern’

The statistics about the reliability of education research are concerning. A 2014 study published in Educational Researcher, a peer-reviewed journal of the American Educational Research Association, analysed papers in the 100 education journals with the best academic reputation. It found that just 0.13 per cent of studies had been replicated - a process whereby procedures are repeated to corroborate results. “It is hard to imagine a figure much lower than that so, yeah, it’s a concern,” says Nosek.

The paper found that where replication had been attempted, only about two-thirds (68 per cent) of those attempts succeeded in reproducing the same results. However, this is not to say that Nosek believes a failed replication is necessarily the death knell for an idea.

“Failing to reproduce a finding doesn’t mean necessarily that the original finding was wrong,” Nosek qualifies. “Of course, that is one possibility, but there are multiple factors that lower reproducibility.

“One is that you simply cannot follow in the description what was done - so it may have been a valid finding, it is just not possible to follow the process.”

A concern here is that if an academic cannot follow the process, it is likely that a teacher won’t be able to either. So how useful is that result?

“Another possible reason [for a failed replication] is that there may be some problem in the process,” Nosek continues. “Maybe the researcher did six studies and only one turned out in the way that they thought was productive, interesting and publishable, so that is the only one that they reported.”

Such selective reporting means that those reading the study do not have access to all of the evidence that was gathered, Nosek says - a potential hazard for teachers looking to incorporate the research into their lessons.

So, how can teachers take steps to help ensure that the research they take on board is actually robust?

“I wish there were an easy answer, but there isn’t,” Nosek says. “My basic message would be that no single study is definitive, no matter how big, or how extensive. Each study is part of an accumulating body of evidence for the phenomenon under investigation.”

The best approach might be for educators to build in their own research when incorporating a particular method, he thinks.

“I like optimism, and I think teachers should approach ideas that they think look promising [with enthusiasm]. But incorporating self-study is wise, so if a teacher is looking to apply an intervention that research suggests might help in their classroom, how can they build in their own evaluation tools? Can they compare the effect of this particular intervention in one term versus the prior term and see if any change happens?”

Building in strategies to assess what is and is not working, accompanied by “some patience and caution…might make for more effective and appropriate application of research in the classroom”, he concludes.

Unfortunately, he says, too much research is “not written in a way that is accessible for translation into classroom practice”. But despite this challenge, “to abdicate the research literature and just do stuff yourself doesn’t actually embrace the good things happening in research literature”.

Grand claims about reforms

Of course, it is not only teachers and educators who look to back up their work with educational research. Education policymakers also make grand claims about their reforms being “rigorous”, “evidence-based”, and “research led”.

The government’s Educational Excellence Everywhere White Paper talks of “fostering a world-leading, evidence-informed teaching profession”, and says that “good, enthusiastic leaders should be able to use…up-to-date evidence to drive up standards”.

Even former education secretary Michael Gove - who famously declared that people had “had enough of experts” while campaigning for the UK to leave the European Union - supported a “more rigorous and evidence-based approach to helping children learn” while in charge of the Department for Education.

This opens up the potential for even more educational research to be subject to bias and malpractice, believes Nosek.

“There are people who have a stake in a particular outcome, so often a group will commission research when they have already decided on their favoured policy and they want to show support for it. That is a conflict of interest,” he says.

He gives the question of single-sex versus mixed-sex classrooms as an example. “Which one is better? There are different groups that have positions about that,” he says.

This can lead to researchers making decisions - without even realising it - that “make the experiment look more successful than it actually is”.

The Center for Open Science advocates a number of possible solutions to the problem of unreliable research. Chief among them is pre-registration - a practice that requires researchers to write down what their study question is, and how they are going to test and analyse it, before the study begins.

“It is about making a pre-commitment to how their data will inform on the research question that they’re asking,” Nosek explains.

Such practice is already required by law for a large number of clinical trials, to guard against pharmaceutical companies manipulating research about medical treatments. “It means that you can see if a trial was started and never got reported. You can compare a pre-registered trial with the outcomes that were reported. It is detectable, but currently in most education research you just have no idea.”

Teachers and others in the school community should put pressure on educational researchers to take part in pre-registration schemes, Nosek believes.

“We have a registry where researchers pre-register their research designs. We are seeing rapid scale-up of this, and it happens because a small community gets interested, people see that it has value for the robustness and integrity of the research they are doing, and adoption starts to broaden,” he says.

“There is definitely interest in the education community in these sorts of activities - it is not yet broad adoption, but there are some early adopters pushing the envelope.”

Ultimately, though, Nosek is clear: whether it is a teacher looking to better themselves in the classroom or a politician looking to reform an education system, responsible use of research findings is the best route to success.

“While we have lots of challenges in the evidence being created, it is still the best available strategy. Decisions should be based on actual information rather than intuition, gut or the historical ways we have always done things - ‘what Grandma said’,” he says.

“Ultimately it goes back to the old aphorism about democracy: it’s the worst system, except for all the others. Likewise with research, it is not as reliable as it could be, but it is better than any other way to find reliable information about the world.”


Chris Parr is a freelance journalist
