Let’s get real about education research

Education often looks to the field of medicine for guidance on becoming an evidence-led profession. So why are we sticking to old-fashioned approaches? Here, Catherine Brentnall argues that realist evaluation, an alternative to randomised controlled trials, could be a better way to build solid research foundations in the world of education

Educators have long been told that they should embrace the research approaches of the medical profession, where experimental studies and systematic reviews are seen as the route to producing rigorous evidence on what works.

We are also familiar, through the work of John Hattie, with the idea that successful educational interventions can be identified through meta-analysis: a statistical technique that involves analysing, combining and summarising previous studies, and quantifying differences by effect size.

We have a charity, the Education Endowment Foundation, set up to use and champion randomised controlled trials (RCTs) to evaluate educational interventions. A 2013 Department for Education paper about building evidence into education argued that conducting RCTs had encouraged advances in medicine and that they could do the same for education.

But health services research has moved on, developing alternative and complementary approaches that no longer position RCTs and meta-analysis as the only, or best, way to build the knowledge base. One such approach is realist evaluation. Could this be a better fit for education?

 

A new way

Realist evaluation is a response to frustration surrounding the inconclusive and contradictory knowledge generated by pre-test/treatment/post-test evaluation. It has its roots in the idea of theory-driven evaluation research, which was introduced by Carol Weiss in 1972 and developed further by Chen and Rossi (1983). These authors argued that even if an experiment demonstrates that a programme has been successful, evaluators have learned nothing about why it has been successful. They suggested a new direction for the field of evaluation: to learn what it is about a programme that makes it work.

Building on that challenge, researchers Ray Pawson and Nick Tilley - early proponents of realist evaluation in the UK - developed the approach to better understand how and why criminal justice interventions worked (or didn’t). Their 1997 book Realistic Evaluation paved the way for other disciplines to adopt the same approach.

Positioned as an alternative or complement to experimental evaluation, realist evaluation has been adopted and adapted in health services research, with the British Medical Journal publishing standards and guidance for realist reviews and syntheses (Greenhalgh et al, 2015).

All evaluation approaches are underpinned by different research philosophies. RCTs, for example, are underpinned by positivist philosophical assumptions, which hold that what can be observed and measured is a valid source of knowledge.

Thus, in an RCT, participants are matched as closely as possible and allocated randomly to two groups, so that any change in outcomes can be attributed to the only systematic difference between them: the programme.

Realist evaluation has a different philosophical underpinning. It is based on the philosophy of scientific realism, which views any changes that happen in a programme as being caused by unseen processes within participants. Thus, the programme itself is not the causal agent; rather, change is the result of the reactions and reasoning of participants.

This is a significant and fundamental difference. As Pawson explains in his seminal work Evidence-Based Policy: A Realist Perspective, in scientific trials the experimental method aims to test whether “the treatment” is effective, and researchers go to great lengths to decontaminate investigations from human influence by introducing random allocation, placebos and double blinding (Pawson, 2006).

For the realist, this is not only impossible but also misses the point; policy delivered through human intervention only works through stakeholders’ reasoning. So trying to wash out bias purges the very features that explain what is happening - the way in which participants think, and change their thinking, under a programme (Pawson, 2006).

As a result, realist evaluation has theory (not measurement) as its start and end point. It posits that a programme - such as an educational intervention - is a set of ideas which, when packaged together, are assumed to lead to positive change for participants.

For example, in my field, enterprise education, schools in England and across Europe are recommended to provide competitions and challenges to young people. These programmes are assumed to generate benefits such as new skills and knowledge, confidence and motivation.

Until now, evaluations have tried to measure such constructs to demonstrate the impact that a programme has made. A realist approach has a different purpose: its core task is making the assumptions that underlie a programme explicit, and evaluating how and why these assumptions are realised (or not).

In relation to my own interests, this could mean exploring for whom, and why, competitive learning will work (or not), or investigating how and why extrinsic rewards generate motivation (or not). I purposefully include “or not” because one of the most liberating facets of realist evaluation is that evaluators expect a range of outcomes, both positive and negative. The approach recognises that any programme is introduced into existing settings and interacts with existing policies, priorities, procedures, attitudes and beliefs.

These contextual features affect how programmes are implemented, which, in turn, influences how people respond. The scientific element of the approach is related to its stance that science should be concerned with theory-building, testing and refining, not only observing and measuring.

 

Better explanations

Applying a theory-driven approach has enabled me and my colleagues to offer better explanations of the mixed results reported for competitively structured enterprise activities, and to point to the factors that are likely to influence outcomes positively and negatively.

Our approach involved breaking down such programmes into constituent parts and piecing together the assumptions that underpin the organisation of activities in this particular, competitive way. For example, it is assumed that organising students into competing teams, setting an inspiring challenge, providing teacher and/or business mentor support and creating an exciting compete-and-pitch finale will develop students’ skills and knowledge and be motivating and rewarding.

However, theory from education and psychology challenges these assumptions. Such competitive processes may be experienced as coercive, may diminish motivation and may decrease entrepreneurial intention (Brentnall, Diego Rodríguez and Culkin, 2018).

We also found that outcomes in enterprise education were likely to be influenced by whether or not a student was competitively inclined, whether they volunteered, whether they won or lost, and, crucially, how well resourced they were, both in terms of technical ability and interpersonal skills (Brentnall, Diego Rodríguez and Culkin, 2018).

Realist evaluation evolved to cope with the issues inherent in evaluating complex health programmes. Take an obesity intervention for children. Such a programme has many moving parts and operates at multiple levels within a community: health workers try to engage the intended beneficiaries (overweight children), but also hope to reach families and, potentially, to influence the culture and inclinations of a community or place. The programme will be delivered by many different people with different motivations, interests and levels of autonomy, to many different participants with equally varied motivations, mindsets and resources to take advantage of the opportunity (or not).

The programme will also exist within a particular geographical, social and political context and history, so the realist approach would aim to recognise what interventions preceded the current involvement, and what infrastructure and support exists (or doesn’t), which will vary from place to place (Greenhalgh et al, 2015).

Education interventions share these complexities, and therefore evaluators wanting to know why something happened (or didn’t) could benefit from utilising or adapting the realist evaluation approach.

In extending the evaluation question from “what works?” to “what works for whom, in what circumstances and why?”, realist evaluation can lead to a richer and more detailed explanation of a programme than is achieved through experimental study.

In enterprise education, experimental research has tended to wash out the social context of activities, but our research was able to illuminate this context as particularly important.

We identified a side effect of competitions: they enable confident, socially and culturally advantaged young people to gain additional social and educational capital that will benefit them further down the line, thereby creating further disadvantage for their less well-equipped peers. Such an analysis extends the thinking of educators and policymakers towards understanding that blanket policy recommendations will not achieve their goals unless programmes are well targeted or refined for different groups.

But realist evaluation is not going to deliver flashy headlines; complex interventions mean complex results. Realists don’t deliver definitive judgements about what works (because any intervention both will and won’t work, depending on all the interacting contextual factors).

Some argue, meanwhile, that the approach is too focused on technical improvement and that this sidelines the political, moral or social issues related to interventions. This is a familiar argument in education.

Gert Biesta wrote about “Why ‘what works’ won’t work”, arguing that evidence-based policymaking should not happen in a vacuum. Realist approaches are still a means to an end. Biesta’s thesis is that in education, the ends should be subject to continuous democratic deliberation and contestation. But questions that focus the mind on such topics as “whose policy?”, “whose interests are served?” and “who gets to decide?” are not in the scope of realist evaluation.

 

Rethinking rigour

Realist evaluation can offer an alternative approach to what counts as rigorous research. RCTs are frequently presented as the gold standard of research, and evidence-based medicine is often invoked as the model to which educators and researchers should aspire. Recognising that evidence-based medicine has moved on is important in terms of opening the door to ways of thinking that suit the complexity of the education context.

In realist evaluation, there is an acceptance that all knowledge is incomplete and fallible. Ideas about standardisation and reproducibility (as they are presented in experimental approaches) are not applicable. Validity in realist evaluation instead rests on refutation; cases that contradict recommendations should be embraced for the opportunity they represent to build the knowledge base and learn why something hasn’t worked.

Finally, the approach may provide an antidote to overconfident and authoritative claims sometimes made about a programme’s effectiveness (or lack of it): it recognises that all programmes will generate a range of outcome patterns according to participants and contexts.

Pawson (2006) advises that there must always be caution and diffidence in deriving any conclusions and policy recommendations: “If evidence-based policy were human, he would be a modest soul, with much to be modest about.”

Recognising such insight might encourage us to think in more expansive ways about what constitutes rigorous research, and adopt a more pluralistic view of what can be learned from evidence-based medicine.

Catherine Brentnall is a doctoral researcher and enterprise educator

This article originally appeared in the 16 August 2019 issue under the headline “Let’s get real about education research”
