I had meant to stay in university until retirement. Happily immersed in research and dissemination, I hadn’t considered the “impact” of my work beyond peer review. Then came an epiphany: would anyone actually read my 30,000 words on the transmission and reception of Piagetian theory in UK teacher training over a 30-year period?
I wanted to make an impact on actual students in actual schools. After five years of work, I’d realised this wasn’t going to do it. Abseiling down from the dreaming spires, I decided to engage directly with school practitioners.
Briefly, I became a sort of sole-trader researcher, peddling practical research experience from school to school, talking in enthusiastic terms about “what research could do for you”. I was convinced I could produce insight that was of real use in the classroom.
It’s hard to explain how demoralising it is to have that conviction repeatedly thrown back in your face, albeit legitimately. It became clear to me that academia and education did not share an understanding of what “research” meant. Was it a set of detailed observations of classroom practice? Was it the manipulation of an independent variable to assess its impact? Was it drilling down into student data in order to reframe classroom challenges in a meaningful way? Would it involve longitudinal study of the impact of wholesale change, or short-term analysis of iterative refinements of classroom teaching?
I realised I could not assume that all educational practitioners regarded research as I did, so I sharpened up my act considerably. My answer to the question of what research was remained the same: it should encompass all of the above, where appropriate. But now I stated it more overtly, and teachers began to engage more readily with the ideas. I also learned, however, that the use of the term “research” within educational discourse was shifting from something that was done by the profession to something that was done to it: a passive workforce receiving the products of research that answered questions others had framed.
This trend seemed to reach its apotheosis with media reports on the March 2013 government paper by Dr Ben Goldacre, Building Evidence into Education. This seemed to offer the education sector a definition of research it could agree on: the randomised controlled trial (RCT). A flurry of social and formal media reported on the model of research that was needed to confirm what worked in education, leading to a heavily précised and often inaccurate account of the paper. According to these reports, the silver bullet should include national trials leading to amalgamated data; national agencies to collect and analyse the data; and the production of definitive answers that could be translated into classroom practice across Britain.
The hows and the whys
The upsurge in RCTs in education is a good thing, with the usual caveats in terms of quality of execution. But the trend towards these trials becoming synonymous with good research worries me greatly. It worries me because of what such a narrow definition would negate. The teaching profession should not only be interested in whether something works, but also in how and why. Similarly, it should not only be interested in hypothesis testing but in generating new questions, too.
As a methodological pluralist, I believe that any definition of educational research should include action research, for many reasons. First, it fully appreciates the power of local context. National, amalgamated data smooths out differences between schools and students in an attempt to make findings more or less applicable to all, but it also makes assumptions about the “average” student. Crucial data on how and why something works must be generated within a framework that’s culturally sensitive to the students involved in the research.
These young people will not necessarily fit into the top-down categories that organise nationally reported exam results. I have worked in many schools where students from estate X and hinterland Y face very different cultural norms and barriers to education, but are all categorised as white, British FSM (free school meals) in the national data.
In your school, do you talk about the problem with boys, or the problem with our boys? I would argue for the latter. The local challenges that subgroups of your students face would never be applicable to a nationwide study, and the subgroups may not contain enough students for a local RCT to be appropriate.
Context is king
Second, a narrow definition of research increases the gap between the people carrying it out and those working in the classroom. I would argue that the people best placed to measure how and why something works are practitioners. They are immersed daily in context-rich data and can generate descriptions with an ease that visiting academics can only dream of.
Above any other framework, action research places practitioners at the centre of the research process. The recognition of a challenge leads to the planning of a change, the enacting of this change and the noting of its consequences. The data produced is then reflected upon before the iterative cycle of action research begins again.
I would argue that this reflection must be undertaken in the classroom by practitioners. If not, then the richest source of data on the how and why is muted and the hypothesis-generation process is in danger of becoming guesswork from outside the profession.
Third, it helps when research is carried out by and for the profession. Papers can be written in a way that’s relevant to practitioners. Although context-rich and culturally sensitive research findings are not wholly transferable from institution to institution, they do serve as a starting point for schools to engage in research themselves. A research-informed profession could become the norm.
A question of respect
I would call on teachers to respect RCTs, which very effectively answer the questions appropriate to that methodological framework, but I would ask them not to assume that this is research in its purest form, or that all else is useless. Teachers undertake research, of sorts, every day and should learn to value it; an action-research framework can help them to do just that. It empowers a profession to refine its craft, and it enthuses individual teachers by helping them to apply their complex professional judgement to something beyond exam results.
I would also ask the media, certain academics and others closer to government to accept the value of all research that does not make claims beyond its data.
The spotlight needs shifting on to a little-reported passage from Dr Goldacre’s paper: “…sometimes people think that trials can answer everything, or that they are the only form of evidence. This isn’t true, and different methods are useful for answering different questions. Randomised trials are very good at showing that something works; they’re not always so helpful for understanding why it worked…‘Qualitative’ research – such as asking people open questions about their experiences – can help to give a better understanding of how and why things worked, or failed, on the ground. This kind of research can also be useful for generating new questions about what works best, to be answered with trials.”
We must all embrace this broad definition of research that applies different methodological frameworks to different situations, to answer – and generate – different questions.
Tom Welch is a research consultant at the SSAT network