RCTs in education ‘uninformative’ and ‘expensive’

Randomised controlled trials often tell us little about what works in education, according to an academic
25th September 2019, 4:54pm


https://www.tes.com/magazine/archive/rcts-education-uninformative-and-expensive

Randomised controlled trials in schools are often “uninformative” and “expensive”, according to an academic researcher.

Dr Hugues Lortie-Forgues researched RCTs carried out by the Education Endowment Foundation (EEF) and found that “only 23 per cent of these trials [showed] statistical significance”.


Background: Why RCTs are effective for education research

Profile: Meet the educational research chief who is in awe of schools

100 years of the RCT: Celebrating the centenary of the education RCT


The lecturer in education at the University of York was speaking at an event marking the centenary of the use of randomised controlled trials (RCTs) in education, held this week at the National Foundation for Educational Research.

RCTs test the efficacy of an intervention by randomly assigning participants to two groups. They are often used in medicine, where they are seen as the “gold standard” for effective research. One group is given a particular medicine, while the control group is given a placebo or no intervention at all.

In education, this might involve two groups of pupils where one group receives a particular intervention - for example, access to a computerised literacy programme - and the other does not.
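The random-assignment step described above can be sketched in a few lines of Python. This is a minimal illustration only, with a hypothetical pupil roster; in real education trials, randomisation is often done at the school or class level rather than pupil by pupil.

```python
import random

# Hypothetical roster of pupils (names are illustrative).
pupils = [f"pupil_{i}" for i in range(10)]

random.seed(1)  # fixed seed so the split is reproducible
random.shuffle(pupils)

# Half the pupils receive the intervention (e.g. the literacy
# programme); the other half continue with business as usual.
intervention_group = pupils[:5]
control_group = pupils[5:]
```

Because the split is random, any pre-existing differences between pupils should, on average, be spread evenly across the two groups.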

However, Dr Lortie-Forgues said these trials are often both expensive and uninformative.

His research, published earlier this year, detailed how the mean effect size of educational RCTs carried out by the EEF is 0.06.

The effect size measures the size of the difference between the intervention and control groups, expressed in standard-deviation units: an effect size close to one would suggest a strong effect, while an effect size close to zero suggests a weak or negligible one.
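As a rough numerical illustration (not from the article), a standardised effect size such as Cohen's d can be computed from simulated test scores. Here the intervention group's mean is shifted by 0.06 standard deviations, the average effect size reported above; all numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated test scores: mean 100, standard deviation 15.
# The intervention shifts the mean by 0.06 SD (i.e. 0.9 points).
control = rng.normal(loc=100.0, scale=15.0, size=2000)
intervention = rng.normal(loc=100.0 + 0.06 * 15.0, scale=15.0, size=2000)

# Cohen's d: difference in means divided by the pooled standard deviation.
pooled_sd = np.sqrt((control.var(ddof=1) + intervention.var(ddof=1)) / 2)
effect_size = (intervention.mean() - control.mean()) / pooled_sd
print(round(effect_size, 2))
```

An effect this small corresponds to a barely perceptible shift in average scores, which is why such results are hard to distinguish from no effect at all.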

Dr Lortie-Forgues also said that the wide confidence intervals of many RCTs meant they were not informative.

“A lot of our trials are uninformative because the confidence interval is so wide we cannot tell whether the intervention has worked or not worked, so after the trial, we are just not sure if we should use the intervention or not,” he said.

A wider confidence interval in a study means that researchers can be less certain about the true size of an effect.
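As a rough illustration of this point (a sketch using a standard large-sample approximation, not figures from the article), the width of a 95 per cent confidence interval around a small effect size depends heavily on how many pupils are in each arm of the trial:

```python
import math

def ci_half_width(d, n_per_group):
    """Approximate 95% CI half-width for a standardised effect size d,
    using the common large-sample standard-error formula for Cohen's d."""
    n = n_per_group
    se = math.sqrt(2 / n + d ** 2 / (4 * n))
    return 1.96 * se

d = 0.06  # the mean EEF effect size reported above
for n in (50, 500, 5000):
    half = ci_half_width(d, n)
    print(n, round(d - half, 2), round(d + half, 2))
```

With 50 pupils per group the interval comfortably spans zero, so the trial cannot tell a small positive effect from no effect or even a small harm; only very large trials produce an interval narrow enough to exclude zero for an effect this small.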

However, Sir Iain Chalmers, a British health services researcher, said that when trialling new interventions a small mean effect size was to be expected, and that this meant the RCTs were being carried out ethically.

Sir Iain said these results were the “ethical manifestation of people looking at uncertain things” and were to be expected “if people go into trials genuinely uncertain about which of two alternatives are going to prove superior”.

He cautioned against drawing conclusions from averages that ignored outliers in the data.

“The trouble is, if you ignore those tails [results on either side of the main part of a distribution curve] and the possibility that there are harmful or beneficial standards there - which may have big implications if the trials were large enough - I think you give a misleading message that there aren’t larger effects than the average that you’ve shown.

“I come from the medical field, and when we have ignored tails and have not gone on to do large trials, we have missed very important adverse effects.”

Stephen Fraser, deputy chief executive of the EEF, said: “The EEF is committed to publishing independent evaluations of every trial we fund, whatever the result.

“All well-run trials produce valuable new evidence. Sometimes, that new evidence shows that programmes and practices which have been claimed to be effective at boosting attainment actually aren’t. For example, our trial of structured teaching observation showed no impact on pupil results.

“In most cases, EEF trials represent the largest and most independent evaluations of the programmes to date. The fact that under such conditions the interventions are not shown to be substantially better than business as usual gives us important knowledge - about previous claims of impact, about the level of caution schools should exercise if expecting large effects from them, and about the difficulty of improving on what most schools achieve day to day.

“This knowledge is useful to support senior leaders in their decision making, as it helps them avoid wasting scarce time and money where it’s unlikely to make much difference.

“We have always said that individual studies have their use, but the most useful evidence for teachers comes from looking across a range of studies and combining this with professional judgement. This is the approach we take with the Teaching and Learning Toolkit and all of our individual studies will feed into this.”
