Flawed sampling falls down

In a preface to the recent OFSTED survey of educational research journals the chief inspector of schools, Chris Woodhead, says "much that is published is, on this analysis, at best no more than an irrelevance and a distraction".

As this remark implies, authors Professor James Tooley and Doug Darby were critical of the material investigated. They invite others to carry the debate forward: I am glad to accept their invitation.

While I am critical, there are some important respects in which the survey does advance debate. First, it sets out its aims clearly - to evaluate the quality of a sample of articles in four British education journals. The judgmental criteria are set out in detail and the data used to form judgments are clearly identified. The report is also careful to state various caveats about the "modesty" of its contribution, and particularly that evaluating people's reports of their research may not be a good guide to its quality.

It may be helpful if I reveal some personal views about research quality. From my experience of refereeing papers and grant applications, and as a journal editor, I do perceive a widespread lack of understanding, including of basic concepts of sampling and logical inference. This is not confined to educational research, however; it permeates general public and political debate.

To return to the OFSTED survey, which concludes that only a third of the papers reviewed satisfy "good practice": a key issue is the decision to study just four journals using two evaluators. This raises several problems.

First, much educational research appears in mainstream psychology, sociology and statistical journals. It may even be the case that the "best" published research appears in such journals. Second, UK research is also published in international journals, which were excluded from the survey. Again, it may be that the best research goes into such journals. Third, a better design would have been to select more journals and fewer articles from each, although with the resources available for the exercise any sample would be very limited.

The authors are critical of the peer-review process. In effect they are claiming that in most cases their judgments are superior to those of the journal referees. This is indeed a strong claim which will take some time to evaluate.

The authors state that the journals reviewed "represent an important strand of academic educational research". However, if we wish to assess educational research we should be sampling the researchers, research teams and institutions carrying out that research. The sampling frame would include individual academics, research teams, survey organisations, and institutions such as OFSTED.

From such sources we can study the activities of the various participants, what kinds of research they engage in, how much it costs, who funds it, how it is communicated, as well as an evaluation of its qualities. Such an approach would allow comparisons, for example between research supported by various funding sources. Furthermore, whatever conclusions we reach about educational research need to be evaluated against the situation in other fields such as medicine or economics.

It is also essential to examine the influence of external factors. The authors suggest how the universities' Research Assessment Exercise may have had deleterious effects on research quality. A more complete analysis would also study the requirements of funding bodies, media attitudes and overt and covert political pressures. Nevertheless, for all its flaws, the OFSTED survey has raised some useful issues.

Harvey Goldstein is professor of statistical methods at the Institute of Education, University of London. A fuller version of this paper is available at www.ioe.ac.ukhgoldstn
