Why searching for ‘what works’ is a wild goose chase

It can be tempting for educators to uncritically accept the ‘objective’ answers offered by psychological research. But quick fixes are doomed to failure in the complex context of real-life schools, writes Nick Dennis
7th September 2018, 12:00am

I can’t recall exactly when I first came across Carol Dweck’s “growth mindset” research, but its simplicity, and its promise, piqued my interest immediately. The theory suggests that people perform appreciably better when solving problems if they believe that intelligence is not a fixed quality.

Suspending my disbelief that simply showing someone that intelligence wasn’t fixed could lead to improved outcomes, I cast around for validation. Respected headteachers, well-known bloggers and even Tes authors had written pieces about this exciting new idea.

Although not entirely sure how it would work in reality, I put together resources based on what other schools had used and held a session with a select group of students. I remember feeling slightly disappointed when one of my sharpest students rejected the idea outright. As I tracked students’ academic performance over the term, the ones who were initially energised did not seem to make the progress the research suggested they would.

I thought this was down to my ineptitude and decided to do more homework on Dweck’s research. What I found was that I had barely scratched the surface of the subject of student motivation and had fallen into a trap that persists in education: believing that the presence of statistical evidence in published, peer-reviewed work means that an intervention should work.

As I looked into other published studies from the discipline of psychology, I became more and more aware of the growing challenge to Dweck’s work within her academic field. Researchers seeking to replicate Dweck’s experiments could not achieve the same outcomes and there were questions about the representation of data in the studies.

That I was misapplying the knowledge was a distinct possibility, but what if the research was not generally applicable and worked only for distinct groups, such as those with low socioeconomic status? Or what if, rather than highlighting ineptitude or the need for further hypothesis testing, it was, as Dr Atul Gawande argued in his Reith lecture series in 2014, an example of necessary fallibility: the inability of science to answer certain types of question?

While Dweck has acknowledged some issues with the research, and the effectiveness of growth mindset theory is still under scrutiny, why did the idea gain such traction in my mind and the minds of teachers across the country? I suspect it had something to do with how we view psychological research in education. Often we see it as providing objectivity and universally applicable truths in contrast with existing educational research, which is generally more qualitative and not as easily applied. My view is that we need to be more open, reflective and cautious in how we consider, and attempt to adapt, research findings to our schools.

Our predisposition for seeking objective truths is formed by the structures and cultures we inhabit as educators. Teachers have an internal moral guidance system that means they want students to do well, but there are also structures that quantify, audit and judge our success. Exam league tables, inspection ratings, lesson observation grades and performance management reviews all put pressure on us to meet, or exceed, targets.

In the context of the societal goal of reducing inequality and enabling young people to be productive citizens who can live fulfilled lives beyond our school gates, the pressure to act correctly, and at scale, across a range of classes, a department or even a whole school, is immense. Our aspiration, and desperation, to address these problems leads us to crave definitive and universal answers, and research based on psychology, such as Dweck’s work or Building Learning Power or High Performance Learning, seems to fulfil our need.

The hunger for insight

The explosion in the number of teachers writing about using the latest psychological research to improve education, along with conference talks on the subject, testifies to this hunger, and to the potential power of such research. After all, precisely this type of research has brought new insight into the learning process. It has proved useful in challenging some fairly ridiculous practices - for example, by asking for the studies supporting the notion that learning can be made visible every 20 minutes or, indeed, can be measured through graded lesson observations. Dialogue has opened up between teachers, government and the inspectorate, and there is optimism that things are changing for the better because of this research.

Yet something has been lost in translation. Whereas the scientific method demands scepticism, imagination and a view that the knowledge gained is provisional, incomplete and, most importantly, incremental, the disinterment of such research from its original academic context and its reappearance in the classroom creates a propensity to solidify and codify where there was once tentativeness, caution and incomplete findings. Scientific evidence becomes transmuted into educational truth: it is universally applicable and serves as the definitive basis for a call to action.

The problem with this is that schools are not controlled environments in the way that experimental conditions are. Schools are made up of “free-range” human beings who have their own experiences, agendas and desires. Simply mapping emerging psychological research on the mind, brain and learning onto this context suggests we believe the published research is omniscient, with perfect knowledge of all situations. This may seem ridiculous, but the language we use betrays our beliefs, as is clear from a phrase associated with psychological research in education: we are all working out “what works”.

Fortunately, there is a more nuanced approach within the scientific community. In 2005, John Ioannidis, a professor of medicine in the US, published a paper entitled “Why most published research findings are false”. It questions the design, methods and statistical analysis of research findings across various scientific fields, and has led to more than a little introspection.

Within psychology, a group of researchers decided in 2015 to revisit 100 published papers and attempt to replicate the experiments. They found that only 36 per cent of the replicated studies had significant results, in comparison with the 95 per cent originally claimed. But talk of a “replication crisis” within the discipline has not appeared to dampen our enthusiasm in education for transplanting ideas uncritically.

Notions such as introducing “desirable difficulties” into learning to improve retention of key information later on, or Cognitive Load Theory (which argues that presenting information in particular ways can overwhelm the mind’s limited capacity to process information usefully), are growing in popularity. And while these are useful avenues to explore and consider within the context of education, what is often missing is an associated discussion of the intense scrutiny, revision and challenge these ideas are subject to within their own discipline and, consequently, what this means for any application.

Beyond these internal debates, we seem to look past obvious inconsistencies. Many of the published research studies are based on US college students, who are, unsurprisingly, very different from UK school populations, which are younger and more diverse in academic learning needs and achievement, and also live in a different culture altogether.

While we are often loath to accept that the school down the road may have something to tell us about our own work, we seem to suspend our situational criticality when confronted by research using the scientific method. This is a troubling path because science alone cannot address our aims. We need only look at a profession that has been dealing with free-range human beings, with research and with putting that research into practice for a lot longer than we have: medicine.

The advent of evidence-based medicine (EBM) in the early 1990s helped to revolutionise healthcare, but it also created problems. EBM’s goal of generating independent, truthful and objective facts that would work in all clinical settings has proved unattainable.

In particular, medics were concerned about how pieces of research would be promoted by parties with vested interests. Another issue was that EBM created a form of bureaucratic decision-making, especially for newly trained clinicians who knew no other form of medical care, leading to the risk that evidence-based guidelines drawn from research would be applied without a real understanding of the individual patient. Finally, there was the issue that while research could suggest that an intervention was significant, it might not work at all in a clinical setting.

Alongside the growth of EBM, medics sought to position research as only one element in a holistic “quality improvement” framework for care. Paul Batalden and Frank Davidoff have defined this approach as the “combined and unceasing efforts of everyone … to make the changes that will lead to better patient outcomes (health), better system performance (care) and better professional development (learning)”, suggesting that research based on the scientific method is necessary but not sufficient to improve outcomes. As well as replicated, robust, usable and multidisciplinary, synthesised research, Batalden and Davidoff argue, there needs to be a deep awareness of the physical, social and cultural identity of the institution in question, multidimensional measurement of any intervention, and a capacity to plan and manage change. All of these things should work in concert, otherwise the provision of care will be diminished.

Education is not medicine, but the experience of the medical profession should provide a salutary lesson about a certain naivety we have regarding the use of research to inform our work in schools and with students. Our current approach to using psychological research diminishes our capacity to really effect the changes we want to see. While such research is undoubtedly useful, there are some things it cannot answer when placed within a classroom or school setting.

Education is an ever-unfinished conversation, and no matter what system or new form of knowledge we try to implement, there will always be unintended consequences. By understanding this, and appreciating that our current use of psychological research needs further development - and, importantly, that we are dealing with free-range human beings - we may have a chance to substantively improve the lives of young people under our care. But when someone offers research into “what works”, we just need to recall the song from the Gershwins’ opera Porgy and Bess and tell ourselves: “It ain’t necessarily so.”


Nick Dennis is a senior leader in Hertfordshire. The fee for this article was donated to the Stuart Hall Foundation (stuarthallfoundation.org). He tweets @nickdennis
