Is cognitive science a load of trouble?

After the rapid rise of cognitive science in recent years, you’d be hard-pushed to find a teacher who hasn’t yet bought into its evidence about the brain processes involved in learning. But now, a warning has been issued about educators applying ‘lethal mutations’ of the research in classrooms. So, could the cognitive science bandwagon be grinding to a halt, asks John Morgan
3rd September 2021, 12:05am

For an increasing number of teachers, understanding cognitive science research and applying methods informed by that research in the classroom has become an integral part of their practice.

Methods driven by thinking about how people learn and thus how they can be helped to learn - spaced learning, interleaving, retrieval practice, managing cognitive load, dual coding using verbal and non-verbal information - are now common in many schools.

In fact, in a recent Teacher Tapp survey, as many as 92 per cent of respondents had heard of working memory and long-term memory, 81 per cent had heard of cognitive load theory and 79 per cent had heard of retrieval practice.

And in a separate survey of teachers conducted by the Education Endowment Foundation (EEF), more than 85 per cent of respondents said that cognitive science strategies were central to their own approach to teaching.

“There’s been a large increase in interest in cognitive science; there’s been a large increase in the amount of practice that is informed or inspired by it,” says Thomas Perry, assistant professor in the University of Warwick’s Department of Education Studies. “Crucially, I think, a critical mass was reached about a year or two ago, where these things started to find their way into policy frameworks.”

Indeed, it’s not just teachers who have become devotees of the “cogsci” creed, but politicians, too. One notable example of this is the overview of research evidence behind the Ofsted inspection framework published in 2019, which said it was “important that we use approaches that help pupils to integrate new knowledge into the long-term memory”, adding that the “learning sciences” were “increasingly generating moderate to strong evidence of practices that can be used to enhance learning across phases and remits”.

Concerns over cognitive science in the classroom

Meanwhile, from this month, early career teachers will be given two years of training under the new Early Career Framework - supported and funded by the Department for Education - in which they will be taught about memory and how to avoid overloading working memory (essentially cognitive load theory).

It is therefore safe to say that cognitive science-informed approaches are now well established. But should they be?

An evidence review recently published by the EEF, and led by Perry, raises some fundamental questions about the evidence base for applying those very approaches in the classroom.

What do those questions mean for how teachers teach? Could the EEF evidence review signal, perhaps, the end of the rise - or the end of the first era, at least - of cognitive science in the classroom?

In order to understand where cognitive science-based methods are going, it is useful to understand how they took off in the first place.

Harry Fletcher-Wood is an associate dean at training provider Ambition Institute, whose modules cover putting cognitive science research into classroom practice. He sees social media as an essential factor here.

“It felt like there was a vanguard - five, six, seven, eight years ago - which was Twitter led: people reading predominantly American research, particularly [the work of psychologist] Daniel Willingham, and advocating for that, then moving out through the ResearchEd conference and blogs and books,” he says.

“That has spread so it’s become a lot more mainstream … now lots and lots of school leaders have come across the same ideas.”

So, rather than official agencies setting the path on cognitive science methods, bodies like Ofsted have been “following the teachers” and the spread of these ideas on social media, points out Mark Enser, who is head of geography and research lead at Heathfield Community College in East Sussex, and a regular writer for Tes. This, he says, is a key reason why the ideas have taken off: the movement is peer led, so comes with the validation of being tried and tested by teachers.

Enser argues that another reason why these ideas have spread so quickly is that the results of the techniques can be “very immediate … you can see it working”. “You do some retrieval quizzes and the next lesson, [pupils] can remember more than they would have been able to,” he says.

This has been a refreshing change, Enser explains. So much educational research is “fascinating” but “esoteric … it doesn’t help me as a teacher to do my job in the classroom on a day-to-day basis - and this does,” he says.

Indeed, the basic principles underpinning much of the cognitive science research centre on a simple argument that most teachers can get behind: that working memory is limited, so there is a need not to overload pupils and to regularly revisit new information to check what they recall and to fill the holes in that recollection. In other words: forgetting is natural, and here are some ways around it that teachers can deploy. Whether some teachers needed cognitive science to tell them that, or knew it already, is perhaps a moot point: the endorsement of academic research is surely welcome.

But have some important caveats around the original research been jettisoned in its rapid, somewhat freewheeling popularisation in teaching?

That’s what the EEF evidence review aimed to look at. And it was a serious piece of work, assessing a final database of 499 studies testing the application of cognitive science-informed approaches. Importantly, the review developed a tool to assess the studies’ “ecological validity”: whether a study was delivered by teachers in realistic, everyday classroom conditions, as distinct from scripted lessons, lessons delivered by researchers or classes that centred on “recreational games” or psychometric assessments rather than typical learning outcomes.

The main review team comprised researchers from the University of Birmingham (where Perry worked at the time), supported by an advisory panel of leading education scholars, cognitive psychologists and neuroscientists.

The evidence base this team reviewed was broad. Perry observes that “a lot of what we call cognitive science has come from basic science”, where there has been “a really strong evidence base built up over many decades, but a lot of it comes from laboratory studies, a lot of it comes from things that are fairly artificial, a lot of it comes from undergraduate students”.

When it comes to the application of cognitive science-informed approaches to teaching the curriculum, “a lot of the work on that front has been done essentially by very expert and creative teachers writing books”, he points out, so it was important to assess what evidence there was on the applied research.

The review emphasises that talk of “cognitive science” covers two distinct arms: cognitive psychology, concerned with mental processes; and cognitive neuroscience, concerned with the brain and the biological processes that underlie cognition.

In terms of headline messages for teachers from the review, Perry says that there are two.

“The first one is that we think cognitive science matters: we think teachers should know about it, it’s absolutely right that this is informing the Early Career Framework and so on, and teachers should have a working knowledge of the principles of cognitive science,” he says. It is “quite clear that the cognitive science principles are generally a good explanation for changes in learning, rates of learning - people are testing these, they are having the expected effects,” he adds.

However, the second, “slightly contradictory but also true message here is that the applied evidence base is far more limited than the basic science”, continues Perry. “It tends to be more complex, tends to be less positive.”

What this essentially means is that the ideas are theoretically plausible: in controlled conditions, there is evidence that they work. The issue the review identifies is that we don’t know how far these ideas hold true in the real world of classroom teaching.

On the matter of ecological validity, for instance, he points out that “even a lot of the studies that were in the classroom” were “teacher proofed” - scripted and delivered by researchers or computer programs. And, he adds, there are “gaps in relation to which subjects have been studied” in the applied research, with “upper primary, lower secondary maths and science in particular … overrepresented in the evidence base”.

This bias towards certain subjects in the research is worth noting, because if the strength of cognitive science-led approaches appears to be in learning concrete facts, where might that leave a subject like English, in which interpretation and nuance are at the heart of things? And what about the early years? Many teachers of this stage prioritise “immersive learning environments”. What effect would a seemingly contrary approach of managing cognitive load have on the youngest children?

For Perry, these are examples of areas “where we have actually quite little applied evidence” and where “we should take care about universalising, generalising these principles”.

On the one hand, it’s positive that more teachers are reading educational research and spreading its key findings to a wider audience through social media. On the other, that decentralised dissemination could lead to classroom application that is unhelpfully detached from the original research.

The EEF review terms this the risk of “lethal mutations”, noting, for example, that some teachers “have reported that [the theory of] dual coding sometimes means that irrelevant illustrations are added to presentations, which may be a distraction rather than a way of developing schemas and optimising cognitive load”.

Megan Dixon, director of research at Holy Family Catholic Multi-Academy Trust and a Tes research columnist, agrees that teaching and learning can suffer as a result of such mutations. But she also sees another worrying trend connected to the adoption of supposed cognitive science approaches: an ideological shift about how we see the broader process of teaching and learning. This is the tendency “to believe that the students are empty vessels that need to be poured with knowledge to pass a test”.

Indeed, cognitive science-informed approaches can seem rather cold, viewing children as purely rational entities who will uniformly respond in the required way if the teacher applies their methods correctly - arguably failing to take account of differences in children’s development, of special educational needs and of the impact of things like hunger or lack of sleep on an individual’s capacity to learn.

Perry agrees that there’s definitely a deficiency in this area in “the account of cognitive science we have at the moment … when we read the practice-facing guidance and the Early Career Framework and so forth”, in that “it’s a model of individuals and information”. Interestingly, that absence is not down to a gap in the research, says Perry, but a blind spot in the consumption of it.

The EEF review team included cognitive neuroscientists and cognitive psychologists, and Perry says that “what you see when you look at the wider literature is that there is a cognitive neuroscience and cognitive psychology of, essentially, relationships and emotions”.

There are “half a dozen studies” on the effect of anxiety on cognitive load, he continues, but “you don’t go into the Early Career Framework and see a little bit on anxiety and how teachers can build relationships and settle kids’ nerves … It’s in the cognitive science, it’s just not in the account of it.”

So what does this mean for how we train teachers to apply these methods? Fletcher-Wood, as someone who designs training programmes, stresses that we should never assume that we can adopt any educational research wholesale.

“Any time you think any single thing is going to solve all your problems, you’ve created a new problem for yourself,” he says, adding that the EEF review offers “worthy caution” in this area.

But at the same time, as a history teacher, he notes that educational research is not exactly flush with randomised controlled trials in his subject.

“The idea that I can wait for anyone to do randomised controlled trials in history - I could be waiting 20 years. There comes a point when you have to make an educated, informed decision about what to do based on the evidence. You have to extrapolate from what there is,” he says.

“If we know about how people learn, if we know that in 150 lab experiments people always forget things, it’s a bold move to say that’s something we shouldn’t teach teachers about, and it’s a bold move to say that’s not really important and is not going to apply to my kids because I’m teaching French, not maths or whatever.”

Where do we go from here, then? Is better training that will help teachers to understand the evidence base in more detail the way forward? Or might we have reached the end of a phase of explosive expansion in cognitive science-informed approaches, with the EEF review heralding a new era of scepticism?

Dixon argues that it is “fundamentally unethical to ignore the limitations” of the research, calling for “a national conversation around ecological validity and around the application of some of these ideas”.

Enser concludes that the EEF review feels like a warning to school leaders to “not dismiss cognitive science - to understand it and look at it in detail - but not simply to impose it”. Cognitive science approaches cannot function as “abstract theory - just telling people this is the thing you should do”, he argues, but instead must be supported by reflection and training and observation by leaders, through a “cycle of theory and practice”.

Having these conversations and reflecting on the application of the methods in detail is bound to take time.

However, one area in which the EEF review might have an immediate impact is in the Early Career Framework, which the EEF has been involved in developing. According to EEF research and policy manager Harry Madgwick, who supported the review summary, some of the “nuances” highlighted in the evidence review are “already making their way into the development of the materials”.

“The full induction programmes that are being developed at the moment - I’m aware that some of them have already integrated some of the findings from the review,” he says.

Meanwhile, Becky Francis, EEF chief executive, says that the review serves as a timely reminder that “applying the findings from educational research in classrooms is a challenge. However, it’s one that teachers know to be worth tackling in order to maximise the impact of their practice and, in turn, accelerate pupil progress”.

Perry adds that because we now know where further research is needed in this area, we can direct resources towards filling the gaps.

“Subject specificity has been sorely lacking from any of the basic science … It would be really nice if the EEF, for example, funded some trials looking at what it looks like in geography, what it looks like in primary - and actually try to fill some of those gaps in the applied evidence base,” he says.

And when it comes to training for teachers, the review’s findings from classroom studies suggest that “doing this at scale, in really realistic conditions, is under-problematised”, explains Perry, who highlights that the EEF’s CPD review is due to report in October.

“Even if you support the [cognitive science] principles … there’s still quite a lot we need to know about what a good CPD programme looks like,” he points out.

More broadly, Dixon thinks the issues around cognitive science research highlight how “schools have been encouraged to use research and evidence [more generally] without a wider conversation about how they should approach this”.

“It is important that we are thinking carefully and systematically about what the research tells us, what it doesn’t and what we don’t know,” she explains.

The concept of ecological validity is “not widely understood” in schools, says Dixon, but the EEF review “highlights the importance of this concept and [of] recognising the limitations and possible risks that might be involved in the large, wholesale adoption of practices that have only been evaluated in small samples in some age groups”.

Where we go next may therefore depend on what teachers learn from increasingly putting this research into practice in their own classrooms and school contexts.

As we enter the new academic year, the EEF review may have the effect of putting the brakes on the use of cognitive science-informed approaches somewhat. However, just as the initial popularisation of the methods began with teachers, it is now ultimately up to teachers to decide whether to bring the cognitive science vehicle completely to a halt or to switch gears and take it forwards into a new phase of more cautious progress.

John Morgan is a freelance journalist

This article originally appeared in the 3 September 2021 issue under the headline “Brain storm”
