‘Next to useless’ research risks holding back school reform

Stronger, clearer evidence is needed to drive meaningful policy change, say academic critics
30th June 2017, 12:00am

Tens of millions of pounds of public money have been spent over the past decade on improving the quality of education research, to ensure that vital decisions made by teachers and heads are rooted in evidence.

But academics are now warning that much of this funding is being “frittered away”. They claim that despite some very high-profile moves to strengthen education research, “little has changed in 15 years or more.”

Moreover, they argue that much of the research behind key education policies introduced across England, such as phonics, is of poor quality.

Meanwhile, leading education policymakers claim that the world of education research still lacks the necessary capacity and skills to provide the evidence that could make a difference, and is failing to engage with the wider world.

These concerns about the validity of education research have major implications for heads and teachers, who increasingly want to base their practice on evidence; for the pupils at the receiving end of these decisions; and for the taxpayers who fund most of the studies.

The alarm has been raised by a group of Durham University academics, who say that most evaluations into teaching methods and classroom practice are “next to useless”.

According to them, studies suffer from missing data and small samples, as well as failing to address the most pressing matters in the world of education. What’s more, the academics claim, the published research is often less than transparent about these data weaknesses.

“It’s not the job of teachers and heads to have to read between the lines - they’re not trained to do that,” argues one of the Durham critics, Professor Stephen Gorard.

In their new book, The Trials of Evidence-based Education, Gorard and co-authors Beng Huat See and Nadia Siddiqui claim that a raft of high-profile interventions, including phonics, computer-based literacy schemes and the “core knowledge” curriculum, have been pushed forward on the basis of weak evidence.

Their conclusions on some of these interventions come in sharp contrast to findings from the publicly funded Education Endowment Foundation (EEF). Set up in 2011 with a £125 million government grant to fund evaluations of teaching interventions, the foundation was established to help solve the problem of poor education research.

But the Durham team’s book warns of a “dangerous interdependence” between the EEF and its use of external research companies “with an unhealthy reliance on repeat ‘business’” - a charge strongly denied by the foundation.

The need for far stronger evidence in education is also keenly felt by those who have worked at the heart of government, helping to form education policy.

Driving national policy-making

Sam Freedman, Teach First’s executive director for participant impact and delivery and a former policy advisor to Michael Gove when he was education secretary, points out: “There are not many quantitative researchers in education departments in universities.

“There are some brilliant ones, but not that many. Instead of having a statistical background, a lot of university researchers have a sociology background.

“That’s valuable, but the kind of large-scale research that can drive national policy-making tends to come from statisticians and economists. There are not yet enough departments that have people with these skills to produce the level of quantitative research needed.”

Even when work is carried out, researchers often fail to convey why their findings matter, according to another former Westminster education policymaker, who prefers to remain anonymous. And, he says, they can also appear reluctant to share their findings.

“Almost no academic reaches out to government. When I was working in government at a senior level, I don’t think I ever had anyone proactively try to share their research with me. When they did, very few could explain it properly or what it meant.”

He lists the EEF, Education Datalab and the UCL Institute of Education as exceptions.

However, Freedman also has reservations about one of the Durham book’s key charges - that interventions such as phonics have been pushed by ministers in the face of flimsy evidence.

“It’s certainly true that there’s not as much evidence for some things as one would like when making policy decisions. But even if it’s not perfect, it can push you towards a particular direction of travel,” he says.

“With phonics, we don’t have gold-plated, 100 per cent evidence, but we have enough to show it’s the right thing to encourage compared to alternatives, whereas for something like summer schools, there’s less strong evidence.”

It is little wonder that some of this evidence is less than gold-plated if - as the book claims - even the experts hired to assess education research regularly fail to highlight problems with its methodology.

The book states: “The peer-review system cannot work while the majority of work remains so poor, because the academics producing such work then form the majority of peers judging the quality of each other’s work.

“Some may not be competent...and some will have conflicts of interest.”

But the EEF argues strongly that the Durham team’s claim over its “interdependent” relationship with research companies is simply false.

Foundation chief executive Sir Kevan Collins points out that the majority of evaluations it funds are carried out by university academics. He adds that there are other reputable researchers, “like the IFS (Institute for Fiscal Studies) - they’re legitimate players”.

Collins also points to the EEF's use of randomised controlled trials (RCTs) - of which it has run more than 100 over the past five years - as an important improvement in the quality of education research.

“It’s a huge step from the debate we were seeing previously,” he says.

“We have to keep raising the bar in the quality of evidence, while making sure, at the same time, we don’t get lost in a hole of academia and maintain relevance to practitioners and schools.”

However, Collins concedes that universities do sometimes lack the capacity to carry out these trials. “It’s probably the case that conducting RCTs with educational institutions in universities is an issue,” he says.

And it is this challenge - finding enough university academics to carry out the research needed - that Gorard claims creates the interdependence between the EEF and the research companies it uses.

So if the EEF were less focused on raising the bar through RCTs, would a greater number of capable researchers come forward to take part?

Becky Francis, director of the UCL Institute of Education, is clear that such a shift could improve some research. She wants big funders to adopt a more "flexible" approach when deciding which education research they view as sufficiently rigorous.

“I applaud the EEF emphasis on RCTs and they have done a great deal to revitalise experimental educational research, which was very grim in the UK before the EEF,” she says. “But RCTs are only one approach to research.”

The institute is currently looking into the impact of setting and streaming in schools, using a mixture of methodologies including qualitative interviews, which has been "fruitful", she adds.

But Jo Hutchinson, director of social mobility and vulnerable learners at the Education Policy Institute, highlights the importance of the quality of research, even if it means narrowing the pool of researchers.

She also says research funders need to set certain “ground rules” that “help to ensure that the findings are comparable from one trial to the next”.

This would make it “easier for schools to understand and use the research”.

“This is important if we want to see evidence informing what happens in classrooms, and not just sitting on dusty shelves,” Hutchinson adds.

It is worth remembering that these studies nearly always involve real pupils at real schools. And Freedman says that means they will always entail some compromise.

“Ultimately, education is a social science,” he says. “The [top-rated studies] are really difficult to do - a lot of schools don’t want to participate in them because they’re busy, or they drop out, or don’t do the intervention right. Education exists in the real world.”


@CharlotteSantry
