How do we close the education research gap?

It isn’t easy for time-pressed teachers to sift through mountains of hard-to-digest research, says Gale Macleod
18th October 2019, 12:03am

You are looking forward to the year ahead, maybe with a new class, maybe with a new school. Perhaps you have been given responsibility for developing one area of the curriculum or asked to take the lead on an initiative such as parental engagement. What do you do and where do you go for support and advice?

I expect that, for most people, the answer will include one or more of the following: colleagues, colleagues in other schools, school managers, local authority advisers, #EduTwitter. But what about research? How much of your practice will be underpinned by the latest and most robust research findings? And if it won’t be, why not?

First, it is confession time: I was a teacher for five years, nearly 20 years ago, and I do not remember ever looking for any research to inform my practice. In true karma fashion, I am now a researcher and I suspect that no teachers ever read my research when looking for ways to improve their practice. I live in hope that this is a function of the challenges of making the connections between practice and research in general rather than a commentary on the usefulness (or otherwise) of my work in particular.

There are some obvious answers as to why research often doesn’t influence practice or, in research evaluation terms, “lacks impact”. Teachers already have plenty of work to do, and, while some might see reading research as something that is desirable, it is likely to be a long way down the list of priorities.

If it does ever reach the top of the to-do list, there is the problem of how to find recent relevant research that doesn’t involve handing over large amounts of money. (I suspect it doesn’t help to know that the money doesn’t come back to the authors.) But join me in a thought experiment: let us suspend disbelief for a minute and imagine teachers have plenty of spare time on their hands and research publications are freely and easily available. Would that be enough to entice you to explore what research has to offer?

I suspect that, for many people, the answer is still “no”. Why is this research/practice barrier so impermeable? One factor is that not all academics are skilled at translating their findings into implications for practice; another is that research projects rarely involve practitioners at the outset in setting the research agenda.

These are things we can try to get better at but, meanwhile, teachers are faced with trying to make sense of a body of research that was famously described in 1996 as “poor value for money, remote from educational practice and often of indifferent quality” (Hargreaves, 1996). These criticisms of educational research, endorsed by Tooley and Darby in 1998, have never quite gone away. From here on, I want to focus on these questions of research quality and proximity to practice because they help us think about how teachers can make sense of the relevance of research to their work.

The crisis of educational research

The first question, then, is this: is educational research any good? To answer, I think it helps to know something about the origin of what became known as the crisis in educational research. Hargreaves made his comments around the time the New Labour government was making demands for “evidence-based policy”. What became clear very quickly was that only certain types of evidence were thought to be good enough to inform policy, and the “gold standard” was the kind of research common in medicine: the randomised controlled trial.

However, the idea that the methodology of medical research (a natural science) can be easily transplanted into education research (a social science) is problematic. Some say we ought to use the same methodologies as the natural sciences; others believe that no science of the social world is possible at all, that it is all just a way for those in power to keep people in their place (or “subjects of the discourse”, as Foucault might have said).

A third perspective is that social science is possible but it needs its own methodological approach because it generates a different kind of knowledge.

The argument from this third position is that we can’t look for generalisable laws (theories) when we try to explain how people (pupils, parents, other teachers) will behave or learn. There are no simple “cause and effect” relationships because how people act depends on what they think is happening and how they interpret the situation.

By contrast, whether an antibiotic works against a bacterium doesn’t depend on what the microbe thinks of the drug. Not only are people complex but the context of the social world is, too - we can’t control every possible thing that might influence an outcome.

The whole point of theory is that it is abstract, context-free and applies in any situation. In other words, by definition, it is distant from the real world of any particular pupil or classroom. But how well little Grace learns her number bonds to 10 depends not just on what (evidence-based) teaching method you use but also on the particular situation: maybe she fell out with her best pal at playtime, perhaps she’s tired because she stayed up late to watch the football, or she may be distracted by the squirrel running up the tree outside - or, possibly, she has decided it’s very cool for girls to be good at maths … or not. And little James sitting next to her will have his own factors influencing his learning. No educational theory is ever going to be able to cover all those eventualities. How could an abstract theory, the kind loved by policymakers, be relevant to the particularities of your school or classroom?

There have been attempts to produce a science of the social world that takes a different approach because it has a different purpose. Research that seeks to understand, not to predict. Research that steers away from trying to provide easy, context-free answers to the question “What works?” and, instead, asks questions such as “Why did what works work?”, “For whom did it work?”, “Who gains from it working?” and “How is it that your carefully designed lesson for 3B went down a treat but with 3A it bombed?”

This kind of research can get under the skin of issues of power and values, not just technical issues. It is usually smaller in scale because it almost always relies on qualitative data - typically, researchers listening to people telling them about their experiences. In the time it takes to conduct one interview with one person, hundreds of people could have filled in an online survey on the same topic. This means there is a trade-off between depth and breadth, and because of the small scale, lack of concrete measures and absence of statistics, questions are asked about the quality of the research.

There is a veneration afforded to large-scale quantitative studies that qualitative researchers can only dream of. People put a lot of faith in numbers. But there is space both for big-picture research that identifies patterns - for example that children growing up in disadvantaged communities tend to have poorer academic outcomes - and the small-scale research that helps us understand what coming to school hungry or having to pull a “sickie” on World Book Day because you can’t afford a costume feels like.

As an aside, it is somewhat ironic that, when holding individual schools accountable for how targeted funding has been spent, funders look for numbers, “hard evidence” and “measurement”, when we know that this kind of approach lends itself to large-scale studies of populations much bigger than one school. The idea of asking teachers, pupils and parents to share how something has influenced them isn’t seen as valid, yet it is precisely this kind of particular, context-specific research that is meaningful.

Judge research on its own terms

The important thing, if you want to use research to inform your practice, is to judge the quality of research on its own terms, starting by getting a sense of what kind of research it is. If a study seeks to make generalisations, then it does matter how many participants there were, whether those participants were representative and whether any correlations were statistically significant. If quantitative measures are being used, how confident are you that they are measuring what they claim to measure? If researchers test children’s reading using text in size 8 font, are they testing reading or eyesight?

But for other kinds of research, the questions are different. Can you trust it? Does it feel right? Does the researcher give enough information for you to work out if their analysis is credible? Do they write enough about where the research was carried out for you to judge whether the findings are likely to apply to your setting, too?

This question of “transferability” is crucial and there are numerous examples of times when findings from research with one group of people have been over-generalised to other contexts. Recently we’ve seen the original “adverse childhood experiences” research, which was carried out with adults attending an obesity clinic, imported into primary schools. Similarly, Freud’s theories were based on his clinical work, not on analysis of the general population.

Another limit to transferability occurs where the values and priorities of the research context do not align with those of your setting. This highlights the importance of asking why something worked, not just whether it did.

Effectiveness might tell us more about the way something was implemented than what was implemented. If interventions aren’t adequately resourced and supported, they are unlikely to work in any context.

There are other “proxy measures” of the quality of research. None of these is foolproof, as quite a lot of (to use a technical term) shonky research finds a home somewhere.

Generally speaking, there is less scrutiny of research published in book form than of research published in a peer-reviewed journal. You can use Google Scholar to search for an article, and it will tell you how many times it has been cited by other authors. Check the author’s background and see what else they have written, with whom, and who funded it.

If you see research in a newspaper, look for the original study. If you can’t find it, email the author. There is nothing academics like more than the warm feeling they get when someone asks them about their research.

Ultimately, though, reaching a decision about whether the findings from a particular study are likely to be relevant to your pupils, parents and school community is a matter of your expertise; the application of practical wisdom on the basis of all you know about your context. Robust research can inform practice but only when coupled with the exercise of professional judgement.

Dr Gale Macleod is a senior lecturer at the University of Edinburgh’s School of Education

This article originally appeared in the 18 October 2019 issue under the headline “How do we close the research gap?”
