5 things you need to know about research

In the rush to become an evidence-informed teacher, it’s easy to overlook the fact that not all academic exploration is equal – and some of it is better, or worse, than it initially appears. Christian Bokhove sets out ways to avoid being suckered by a persuasive consensus, guard against injudiciously dismissing oppositional findings, and recognise the limitations of educational research
12th October 2018, 12:00am
If you were to stop and observe teachers doing the Grand Tour of educational research that now seems a compulsory journey for all school educators, you would notice something concerning: where teachers choose to stop and where they hurry past, throwing a dismissive glance as they go, is highly variable.

For some, the shores of Sweller, Bjork and Willingham are a must-see; for others, not so much. Certain teachers walk among the more historical streets of a Piaget, Vygotsky or Bruner, others would sooner demolish the lot. There are groups who flock to neuroscience alone; others who single out cognitive psychology; and some who make a pilgrimage to the large body of work on mindfulness, wellbeing and motivation.

A further group looks on disparagingly at it all - they’d rather stay at home, thanks.

Rarely do the groups coexist peacefully. Armed with studies new and old, they do battle on the open squares of social media or in the taverns of TeachMeets. Flaws in opposing research are publicly deconstructed; faults in the studies they themselves follow are ignored, hidden, explained away.

It’s not a productive situation for anyone. The social sciences, and educational research in particular, cannot be consumed like a cruise-ship holiday, where you pick the route you already know you will like and ignore the rest.

Nor can it be consumed in a way that ignores the important limitations upon it. But neither should those limitations be hidden away, or held up as fatal flaws, depending on our ultimate objective.

In short, if education really is to become evidence-informed, it should have a more mature relationship with the research most connected to the profession. And that approach, in my view, can be supported by doing the following five things.

1. Acknowledge that educational research is messy, but that this does not negate its importance

Educational research has a poor reputation in some quarters; for those hoping to signal their superior knowledge of the research world, it is often used as a punching bag. It is also regularly called out for being too woolly: “Where are our facts?”

But while issues sometimes arise with this field, a blanket condemnation is unfair.

David F Labaree, professor emeritus at the Stanford Graduate School of Education, did a useful piece of work (1998) on the type of knowledge that educational research produces. He found that two characteristics in particular make it difficult for researchers in what he calls soft-knowledge fields to establish durable and cumulative causal claims (essentially, claims that are a solid bet to be right). Firstly, educational research concerns aspects of human behaviour, and humans have a tendency to be unpredictable. Secondly, he argues that the embedded values and purposes of the actors under study create a messy interaction between the researcher and the research subject; what a teacher thinks of an intervention, for example, can influence how well that intervention works.

According to Labaree, these factors have several negative consequences for educational research: a lower status in academia; a weak authority within education and educational policymaking; pressure to transform education into a hard science; pressure to transform education schools into pure research institutions; and a sense that the field never gets anywhere - we say we know something and then, a few years later, we find we don’t.

But the nature of educational research’s “soft knowledge” also has positive consequences: producing useful knowledge is not a bad thing; it is created relatively free from consumer pressures; there is freedom from disciplinary boundaries and hierarchical constraints; and it has an ability to speak to a general audience. For example, in what other discipline could we so easily and freely discuss a theme such as “setting” from perspectives such as pedagogy, sociology, psychology, economics and our own anecdotal experiences?

If teaching really is to become evidence-informed, rather than focus on making social science “hard” or dismissing all educational research as inferior, perhaps we should embrace soft knowledge. We should recognise the problems and stay critical, but not throw the baby out with the bathwater. There is much that is useful coming out of this field, and we are in danger of missing it.

2. Be more mindful of the complexity around cause and effect

In any conversation about research in education - indeed, in any conversation about education in general - there is almost always a moment when someone says that the item under debate shows correlation, not causation. This links back to a centuries-old debate between the philosophers David Hume and Immanuel Kant over whether everything is experience or whether we can know more than our five senses tell us. In contemporary discussion, the point is often deployed to end the debate: you cannot prove that one thing causes the other, so any connection is invalid. Such a stance is problematic. Of course, challenging research by pointing out that correlation is not causation is important. But, in my view, this does not mean that research should never make any inferences.

With students, I always give the example of the high correlation between outdoor temperature and ice-cream sales. A high correlation between those two variables makes sense: with a higher temperature, there are more ice-cream sales. And it makes sense to assume that the higher temperature “caused” the higher ice-cream sales, not the other way around.
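To make this concrete, here is a minimal sketch in Python, using made-up numbers purely for illustration. The correlation coefficient is identical whichever way round you compute it; the direction - heat driving sales rather than the reverse - is an inference we bring from outside the data.

```python
# A minimal sketch (hypothetical numbers throughout) showing that correlation
# is symmetric: it carries no information about which variable drives which.
import statistics

temperature = [14, 18, 21, 25, 28, 31]            # hypothetical daily highs (C)
ice_cream_sales = [120, 180, 210, 300, 340, 410]  # hypothetical units sold

corr_ts = statistics.correlation(temperature, ice_cream_sales)
corr_st = statistics.correlation(ice_cream_sales, temperature)

print(f"corr(temperature, sales) = {corr_ts:.3f}")
print(f"corr(sales, temperature) = {corr_st:.3f}")  # identical value
# The causal direction is not in the statistic; it comes from our
# background knowledge about heat and ice cream.
```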

Admittedly, in the “real” world, it is seldom that clear-cut. So we need to be measured in the inferences we make: with most social science research, we have to be careful about the chicken and the egg.

But in some cases, teachers can be too eager to point out an inference as being flawed rather than seeking out the reason why that inference was made and discussing whether it is useful. Admitting that the cause is a leap of faith should not negate discussion of the usefulness of that leap.

And in some cases, teachers can be too keen to cite a cause when the situation is much more complicated than has been presented. For example, I have recently seen several people cite ED Hirsch’s work to argue the negative consequences of a child-centred curriculum. That argument relies on declining scores on a standardised test - scores from a period in which wider education reforms were also taking place. To my mind, ascribing the effect to one particular cause in this case is highly problematic.

It’s a balancing act, and it is one at which we need to get better.

3. Remember that science is incremental

Some people will argue that the hallmark of the “scientific method” is the philosopher Karl Popper’s concept of falsification: it should be possible to “prove” a hypothesis wrong. If there is no conceivable way of testing a hypothesis, then it is pseudoscience. Popper also suggested that evidence that conflicts with a hypothesis must lead to its rejection.

But another important name in the philosophy of science is Thomas Kuhn. His view, in contrast to Popper’s, was that “wrong” results - those in conflict with the prevailing paradigm - do not necessarily damage the consensus view; at first, they are put down to error. After a while, as conflicting evidence accumulates, a crisis point is reached and a new consensus view emerges. He called this change in consensus a “paradigm shift”.

You can see these two views playing out in research debates in education. People will cite a particular study that conflicts with a consensus view, making big statements such as “X turns out to be wrong”. It has been done with topics such as achievement leading to motivation, rather than the other way round, and whether knowledge is, and only is, domain-specific (the near-consensus view is that knowledge imparted during instruction includes some mixture of domain-general and domain-specific information).

But the reality of science is much more in keeping with Kuhn’s view. No single study necessarily negates a prevailing view. Research is more incremental: every study adds to the total body of knowledge about a topic. Over time, the prevailing consensus can change, as has happened in the past with, for example, the cognitive revolution in the social sciences, which moved away from behaviourism towards an acceptance of cognition as central to the study of human behaviour. However, such a shift demands a weight of evidence, not a single study.

We need to be mindful of this when studies are published that seem to go against the consensus view. The tricky thing here is the balance between a consensus view that does not allow contrasting outcomes, and throwing out a whole theory on the basis of one study.

4. Understand the role of context

Educational psychologist David Berliner (2002) argued that educational research is “the hardest science of all”. He reached this verdict because social scientists operate under conditions that physical scientists would find intolerable: they face particular problems and must deal with local conditions that limit generalisation and theory building. Hence, when reading research, context is crucial, so that these issues can be considered.

As an example of the power of context, Berliner cites Project Follow Through, a study that comes up again and again in teaching discussions because its findings chime with the current government’s pedagogical aims. It is one of the largest educational projects ever conducted: a dozen philosophically different instructional models of early-childhood education were implemented at multiple sites in the US over a considerable period (from the 1960s onwards). The design used “planned variation”, or a “horse-race design”, in which sponsors, together with the local community, would set up an instance of the intervention.

Normally, in scientific experiments, treatment and control groups are selected through random assignment. In “planned variation”, districts themselves chose the interventions they wanted to be implemented in their schools. Teaching models were evaluated for their effects on basic skills, cognitive conceptual skills and affective development. The consensus among most researchers is that structured models tended to perform better than unstructured ones, and that the direct-instruction and behaviour-analysis models performed better on the instruments employed than the other models did.

However, there was a complicating factor: the variance within programmes was larger than the variance between programmes. Each local context was different: personnel, teaching methods, budgets, leadership and kinds of community support. So the difference in outcomes between two sites running the same direct-instruction model could be bigger than the difference between direct instruction and other models.
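A minimal sketch, with entirely hypothetical site scores, of what “within-programme variance larger than between-programme variance” means in practice: sites running the same model spread out more than the programme averages do.

```python
# A minimal sketch (made-up numbers) of within- vs between-programme variance.
import statistics

# Hypothetical mean outcome scores for four sites per programme
programmes = {
    "direct_instruction": [62, 48, 71, 55],
    "open_education":     [50, 64, 44, 58],
    "cognitive_model":    [57, 45, 66, 52],
}

# Between-programme variance: spread of the programme averages
programme_means = [statistics.mean(sites) for sites in programmes.values()]
between = statistics.pvariance(programme_means)

# Within-programme variance: average spread of sites around their own mean
within = statistics.mean(statistics.pvariance(sites)
                         for sites in programmes.values())

print(f"between programmes: {between:.1f}")  # ~4.7 - models barely differ
print(f"within programmes:  {within:.1f}")   # ~63.0 - local context dominates
```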

Does this mean that the research conclusions are faulty? Not at all. But it does mean that both sides - those citing the study to suggest they are right and those pointing out its flaws - need to discuss the issues openly. Context is very hard to control, so it should be part of the discussion, neither dominating it nor ignored altogether.

Context matters but, in my opinion, it should not be an excuse to avoid at least trying to find context-independent “truths”.

5. Know that there is no ‘right’ way to measure

One aspect explored further by Labaree (2011) concerns the historical and sociological factors that have made educational researchers dependent on statistics. This dependence was mainly a mechanism to shore up their credibility, enhance their standing and increase their influence in the realm of educational policy.

Of course, there is value in the effort to make educational knowledge more quantifiable and generalisable - it is not helpful to continually have to say “it all depends” - but the effort also leads to two problems.

One problem is that adopting these practices risks destroying the local practical knowledge that makes the individual classroom function effectively. As Labaree puts it, we are forcing a rectangular grid onto a spherical world. Measurement can be useful but, to judge its usefulness, we need to know what has actually been measured. For example, socioeconomic status is often not measured directly but via proxies such as parental education and the number of books at home; growth mindset and cognitive load are often measured through self-report.

My point is not that this invalidates numbers, but that it matters what is actually being measured.

A second problem Labaree noted is that this approach to educational questions deflects attention from issues in the field that can’t easily be reduced to numbers. There is a focus on what researchers can measure rather than on what is important. On this point, we need a better understanding of the strengths and limitations of quantitative and qualitative data. Rather than dismissing methods, intentionally or not, it would be best to make sure that “the question drives the methods, not the other way around” (Feuer et al, 2002). Depending on the question, one or more methods can be adopted; no research design implies either qualitative or quantitative data (Gorard, 2004). I think this can lead to greater mutual understanding - between researchers and practitioners, and between disciplines - and less judgement.

It has become popular to criticise educational research for the whole raft of challenges the field faces. In doing so, it is important to realise that the knowledge produced in educational research is different: it will always be difficult for it to be as “hard” as the natural sciences. We should accept that research with humans is “soft” and “difficult”.

Some acknowledgement of the issues mentioned here could go a long way towards understanding what’s going on. We should not hide from the problems of educational research (hopefully, replication studies and preregistration can confront publication bias head-on), but neither should we write off the whole field because of those problems.

If teachers are to be more research-informed, and if that information is to be useful and applicable, then academics and teachers need to work together - incrementally, as science should - to build an ever-clearer picture of what works, for whom and under which conditions. These five things can contribute to that mutual understanding.

Christian Bokhove is a lecturer in mathematics education at the University of Southampton

References

Berliner, D.C. (2002). Educational research: The hardest science of all. Educational Researcher, 31(8), 18-20.

Feuer, M.J., Towne, L., & Shavelson, R.J. (2002). Scientific culture and educational research. Educational Researcher, 31(8), 4-14.

Gorard, S. (2004). Sceptical or clerical? Theory as a barrier to the combination of research methods. Journal of Educational Enquiry, 5(1), 1-21.

Labaree, D.F. (1998). Educational researchers: Living with a lesser form of knowledge. Educational Researcher, 27(8), 4-12.

Labaree, D.F. (2011). The lure of statistics for educational researchers. Educational Theory, 61(6), 621-632.
