Who questions the questioners?

9th October 1998, 1:00am

Educational research has recently been criticised in two high-profile reports. But here, two leading academics explain why they think the reports were at least as flawed as much of the research they castigate

The two critical reports on educational research that have appeared in the past couple of months have culminated in a Government commitment to “reform” it.

James Tooley’s recent survey of articles in educational journals has been generally criticised for exhibiting many of the methodological weaknesses it ascribes to others. But I believe that the second report, which the Department for Education and Employment commissioned from the Institute of Employment Studies at the University of Sussex, is also of questionable quality.

It presents evidence about the “research agenda”, the “research process”, and the “dissemination” of educational research. This consists almost entirely of oral or written testimony from funders, policy-makers, teachers, educational researchers, and others. On this basis, the authors conclude that much research is methodologically problematic and has had insufficient impact on practice.

The report’s conclusions cannot, however, be sensibly derived from the evidence. First, given its title - Excellence in Research on Schools - what is required is a standard by which to judge excellence. However, as the authors note, this research covers a wide range of subject matter, and involves different disciplines, “and therefore can be difficult to define”.

They state that their report was underpinned by a definition which “identified educational research as that which ‘critically informs education judgments and decisions in order to improve educational action’”. Yet this definition is problematic because it implies that research which is not taken up by policy-makers or practitioners is not educational research.

Moreover, “inform” is open to many interpretations, a point made by the originator of the definition, Professor Michael Bassey: it could include the most indirect and diffuse influence.

The authors also recognise that educational research can vary in goal and intended audience. (After all, what is being discussed is not research funded by the DFEE’s research budget, but that deriving from funds allocated through the Higher Education Funding Council for England for independent academic research.)

So no clear conception of educational research underpins this study. Equally important, the authors seem to draw on only a tiny proportion of the massive literature available on the relationship between research and practice, despite criticising educational research for failing to build on previous work.

Had they looked at Nisbet and Broadfoot’s excellent review, published in 1980, they would have found that virtually everything they and their informants say has been said before many times, and that some of the proposed remedies have been tried.

Information about past experience would surely have been highly relevant. And so too would have been direct evidence about the quality and impact of current research. But this is not offered.

Unlike Tooley, whose report examined a small sample of research reports, the authors apparently reach their conclusions about the quality of research almost entirely on the basis of their informants’ opinions. The same goes for their conclusions about the impact of research.

We might wonder how well-informed each informant was. I was one of the respondents, and my answers were based on a knowledge of the literature that is far from comprehensive.

We might also ask how the authors allowed for the biases in informants’ judgments; and for the influence of their own position as members of an applied research unit.

Furthermore, if, as the authors say, the influence of research is usually indirect, is it likely that anyone could come to sound conclusions about its nature and extent without carrying out detailed investigations?

These serious weaknesses in the report raise interesting questions about the central concept on which it relies: that of evidence-based practice. Given their commitment to this idea, in criticising research as an inadequate basis for policy-making and in putting forward policy recommendations of their own, the authors imply that their report exemplifies the shape which research ought to take in future. Yet, as I have shown, their evidence is weak, and so presumably should not be used as the basis for policy.

That it will be so used also tells us something important about the nature of policy-making that is neglected by advocates of evidence-based practice: even when it claims to be evidence-based, policy-making is rarely driven primarily by judgments about evidence. The implication is that if research is to have an impact on policy-making, it will probably have to supply the evidence needed to support policies that have largely been determined on other grounds.

Martyn Hammersley is professor of educational and social research at the Open University
