Running the rule over the experts

Knowing whether a pedagogical study is reliable enough to be used in the classroom can be difficult, finds Chris Parr, but there are some tell-tale signs
3rd August 2018

Not all research is created equal. Academia is littered with examples of previously respected theories being disproven and even ridiculed by subsequent scholars.

Perhaps the most high-profile example is the disgraced former doctor Andrew Wakefield’s fraudulent 1998 research paper that falsely claimed a link between the MMR vaccine and autism. It was published in The Lancet, a long-established and prestigious, peer-reviewed medical journal. The journal’s editor, Richard Horton, subsequently admitted that the findings in the paper were simply “false”.

In the field of education, cases of poor-quality research abound. As Brian Nosek, professor in the department of psychology at the University of Virginia, and executive director of the institution’s Center for Open Science, told Tes in an interview earlier this year: “Education has all of the features that make doing robust, reliable, reproducible research challenging.”

He singled out the very culture of academia - which often judges scholars on the number of papers they get published - as being one of the main issues.

“Those pressures are real, and the competition is heavy, so researchers are effectively motivated to maximise the ‘publishability’ of their results, even at the cost of their credibility.”

The victims here are schoolteachers and, by extension, their pupils. If a well-meaning primary school head is mistakenly pushing for a particular approach to phonics teaching based on a research paper that was produced only so that its author could get a promotion, then the outcomes for that school could be jeopardised.

Sift happens

How, then, can teachers and school leaders who wish to incorporate pedagogical research into their classroom practice be sure that the information they are accessing is reliable?

“This is a real problem, as many teachers do not know where to look to find reliable, robust evidence,” says Julie Watson, memory and metacognition lead at Huntington School, a comprehensive secondary in York and part of the Research Schools Network operated by the Education Endowment Foundation and Institute for Effective Education.

“Teachers need to be very wary when they look at pedagogical research,” she says, particularly given that many are “already struggling to manage their workloads” and cannot afford to dedicate too much time to verifying research claims.

While it may be difficult to know for sure whether a piece of research is reliable or not, there are a number of clues to look out for. Tim van der Zee, an expert in educational and cognitive psychology based at Leiden University in the Netherlands, says that one of the first things teachers should check is the sample size - the number of people who took part in a study.

“This can give a rough estimate of the informational value of a study,” he says.

“To give you an example, if you just want to know a correlation between one thing and another - anything, it doesn’t really matter - then you tend to need hundreds of people just to estimate a single relationship. Even then, you don’t know what drives the correlation - if it is real, if it is causal, or if it is fake.

“You need hundreds of people just to give a somewhat reliable estimate, but in education research we often see studies with just 20 or 30 people from which a whole bunch of correlations have been calculated. People then interpret these correlations as causation, even though everybody knows they shouldn’t.”
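To see why sample size matters so much, it helps to run the numbers. The following is a minimal simulation sketch in Python (the population, its true correlation of 0.2 and the sample sizes are all invented for illustration, not drawn from any study): it repeatedly “runs” small and large studies on the same population and records the correlation each one would report.

```python
import numpy as np

rng = np.random.default_rng(0)
TRUE_R = 0.2   # the real population correlation (illustrative)
TRIALS = 1000  # number of simulated studies per sample size

def simulated_correlations(n, trials=TRIALS):
    """Draw `trials` samples of size `n` from a population with
    correlation TRUE_R and return each sample's estimated correlation."""
    cov = [[1.0, TRUE_R], [TRUE_R, 1.0]]
    estimates = []
    for _ in range(trials):
        x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
        estimates.append(np.corrcoef(x, y)[0, 1])
    return np.array(estimates)

for n in (25, 300):
    r = simulated_correlations(n)
    print(f"n={n:3d}: estimates span {r.min():+.2f} to {r.max():+.2f}, "
          f"spread (sd) {r.std():.2f}")
```

Because the uncertainty in a correlation estimate shrinks roughly with the square root of the sample size, the 25-person “studies” scatter several times more widely than the 300-person ones - and a fair number of them point in the wrong direction entirely, despite the true correlation being positive.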

Use your common sense

Charles Hulme, professor of psychology and education at the University of Oxford, says there is another, arguably simpler, method that teachers can apply when assessing a paper: the common-sense test. They should always ask themselves whether the theory under discussion sounds reasonable, he says.

“Always ask, ‘Is this plausible?’,” he says. “For example, is it likely that smoking cigarettes causes lung cancer? Yes. Is it likely that eating bananas will cure lung cancer? No. Think in a sophisticated way about causes. After that, think about effect sizes: if smoking increases the risk of lung cancer, ask ‘by how much?’”
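Effect size is that “by how much?” question made concrete. One common measure, Cohen’s d, divides the difference between two group means by their pooled standard deviation. Here is a minimal sketch in Python with hypothetical test scores (the numbers and groups are invented for illustration, not taken from any study):

```python
import statistics

def cohens_d(group_a, group_b):
    """Cohen's d: difference in group means divided by the
    pooled standard deviation of the two groups."""
    mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    n_a, n_b = len(group_a), len(group_b)
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return (mean_a - mean_b) / pooled_sd

# Invented end-of-term scores for an intervention class and a control class
intervention = [68, 72, 75, 70, 74, 71, 73, 69]
control = [65, 70, 68, 66, 69, 67, 71, 64]
print(f"Cohen's d = {cohens_d(intervention, control):.2f}")
```

With these invented scores, d comes out at about 1.6 - an implausibly large effect for a real classroom intervention, which, by Hulme’s own plausibility test, would be a reason to look at the study more closely rather than to celebrate.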

The problem of assessing the reliability of research is one that academia has grappled with for years. One issue is the sheer volume of research journals. To put it in perspective, Elsevier’s Scopus database contains information on more than 36,000 academic journals produced by more than 11,500 publishers. So how can teachers be sure which journals are reliable - never mind which of the thousands of papers they have published are?

One clue is a journal’s impact factor. This is calculated as the average number of citations that a journal’s recent articles receive - in the standard two-year version, the citations gathered in a given year by everything the journal published in the previous two years, divided by the number of those articles - and is used in academic circles as a proxy for a journal’s reputational standing. A journal whose 100 articles from 2016-17 were cited 250 times in 2018, for instance, would have a 2018 impact factor of 2.5. However, a high impact factor is no guarantee of quality.

“If a journal has a higher impact factor, then it is supposed to be better,” explains van der Zee. “But it has been shown that, broadly speaking, impact factors are negatively correlated with quality of research. It is a very weak negative correlation but it is certainly not a positive one.”

Good journals “obviously publish good articles, but they also publish very low-quality articles”, says van der Zee. He warns teachers to be wary of articles that appear to be backed up by multiple studies which, on closer inspection, turn out to be based on questionable or unreliable data.

“Take learning styles as an example,” he says, referring to the idea that each student learns best through a particular mode: visual, aural, kinaesthetic and so on. “There is a lot of evidence that it has no impact whatsoever. It is such a popular idea, though, that people still cite it. But if you follow the citations, the articles [to which the citations refer] don’t provide any evidence.

“This makes it very hard for teachers to get an indication of value, because you could probably cite 100 studies saying that learning styles is a thing, none of which actually provide sufficient evidence to prove it.” Van der Zee calls the phenomenon of questionable research citing evidence-free studies a “bubble” that “ultimately bursts”.

Another bubble teachers should be aware of is the blogosphere, says Megan Dixon, director of literacy at the Aspire Research School in Cheshire. Frequently, complex research is condensed into short, accessible blog posts - or even tweets - that fail to consider many of the nuances associated with the original publication.

“It is important to read further around topics,” she says, “but be wary of clicking on a blog that takes you to another blog, which takes you to someone else’s anecdotal opinion piece. That is not research. Really explore the original research, explore where it has been published. Peer review isn’t perfect but there is at least some sign that someone else has critically evaluated it.”

Is everything as it seems?

As such, Dixon encourages staff at her school to approach research summaries found in blogs and on social media with scepticism.

“All the nuance of the findings is lost and watered down. You should always have in the back of your mind that if it sounds too good to be true, it probably is.”

That’s not to say teachers should assume that nothing is as it seems. Dixon points teachers in her school towards sources that are generally more reliable than others. “The Education Endowment Foundation and the Institute for Effective Education are a good source of accessible research, and university blogs are a reasonably trusted source. Approach all of these things with scepticism as well, but they are good places to start,” she advises.

While it is certainly true that many of the problems with education research lie with the researchers themselves, teachers also have a responsibility to ensure they are interpreting findings responsibly. Some, Dixon says, are always on the lookout for an easy, quick-fix solution - and it is easy to understand why.

“Teachers are under such pressure to find solutions that will instantaneously improve their outcomes, but it doesn’t work like that,” she says. “There are no magic bullets.”

One of the things that teachers can often overlook, says van der Zee, is the precise make-up of an academic study. If teachers are hoping to recreate the positive results of a study in their own classroom, then it is vital for them to be operating in circumstances that are directly comparable to those in which the research was carried out.

“One general guideline to keep in mind when you are reading an article is that you have to put in a lot of effort to make sure that you focus on what was actually tested,” he says. “People have this habit of very easily generalising when an intervention might have been tried once, in a very specific situation, with very specific people.”

There is a temptation among researchers to say “we tried this intervention and it worked” but “that is not how we should think about it”, van der Zee says.

“It doesn’t ‘work’ or ‘not work’,” he adds. “We only know it worked in this specific situation, with these specific measures and these specific students.”

Keep a critical eye

Dixon agrees that it can be tempting for teachers to overlook the nuances of research.

“If a report tells you that eight out of 10 students preferred a particular reading scheme, then you need to find out more about the students,” she says. “How were they chosen? Are those students anything like the pupils that you are working with in your school? Maybe it was a study of male students and you are teaching in a girls’ school? Always be aware of that kind of perspective.”

Even when research has a solid evidence base, a critical eye is still needed, Dixon adds.

“There is a temptation to look at it and say, ‘I don’t like that bit so I won’t do that bit’; or, ‘My students won’t like that bit so I am not doing it’. You can’t then complain when it doesn’t work. How can you expect it to if you haven’t done it in the way the study said?”

With so many challenges, it is easy to become cynical about the value of educational research generally, and particularly papers that relate directly to best classroom practice.

However, Christian Bokhove, a former teacher and associate professor in mathematics education at the University of Southampton, believes that there is “a lot of solid work around”.

“I personally don’t like the caricature that education research in general is poor, because people are tarring everyone with the same brush,” he says.

In his 2018 study This is the new myth, which looks at how to prevent research myths from being viewed as facts, Bokhove details a number of practical steps that people can take when assessing scholarly articles.

“Firstly, I would recommend trying to follow up sources as much as possible,” he writes, while acknowledging this is “of course a very time-consuming affair”.

Avoiding any personal biases is also key. “Perhaps refraining from too firm a position until you feel you have reviewed a fair amount of material, from different actors, might be a good strategy,” he writes.

“I would recommend that we are mindful of oversimplifications, too,” he continues. “I completely understand that providing a multitude of pages to describe the complexities of an educational phenomenon is not helpful for practitioners. However, the fact some overcomplicate things does not mean that ‘simple is best’ either.”

Dixon agrees that there is good research out there - although she believes that those looking to find it should not fear guidance that can be boiled down and communicated “simply, precisely and clearly”.

“If it can’t be simplified, then you need to ask why there is so much jargon,” she says.

In keeping with this spirit, Dixon’s own advice to teachers when reading a research paper is very easy to follow. “First of all, ask yourself if you are convinced by it. Is it credible, is it biased?” she says. “If you are happy, then ask, is it relevant to your school - and can you afford to do it?

“Finally, ask if the research is applicable to your context. If you are happy on all those fronts, then go ahead - but proceed with caution and nuance, and be clear about what outcomes you want to see. If it isn’t doing what you want, then be prepared to reconsider.”

Chris Parr is a freelance writer
