How to measure metacognition in schools

While teachers increasingly appreciate the importance of getting students to ‘think about thinking’ – and how to plan, monitor and evaluate their learning – children’s metacognitive skills are notoriously difficult to assess. But new research could change all that, finds John Morgan
24th September 2021, 12:05am

A group of children are carefully planning the path they will take through a zoo. They have a long list of animals to feed. Getting all the way from the goats to the elephants, via the walruses - without missing the jellyfish or llamas - is no easy task.

Thankfully, no animals will go hungry if these children’s plans don’t quite work out. This is not an exercise on a school trip but a paper-based problem-solving task designed by researchers at the University of Cambridge and Virginia Commonwealth University, aimed at measuring metacognition in children.

Metacognition - often dubbed “thinking about thinking” - is increasingly being seen as a big deal for schools. One project, which was funded by the Education Endowment Foundation (EEF) and tested a metacognitive strategy for writing, found an estimated nine months’ additional progress for pupils who took part. And the EEF’s evidence review on metacognition, published in 2020, recommends that teachers “explicitly teach pupils metacognitive strategies, including how to plan, monitor and evaluate their learning”.

To do that effectively, though, teachers would have to work out how far pupils’ metacognitive skills go and, thus, how much support is needed - yet metacognition measurement is notoriously difficult. This is where the Cambridge and Virginia Commonwealth researchers say they have hit on something new.

Where most measures of metacognition focus on memory, or rely on labour-intensive observation of video recordings and on subjective self-report questionnaires, the Zoo Task, according to the research authors, “allows researchers to holistically metric multiple metacognitive skill components on a large-scale basis”.

What might this mean for teachers in the classroom?

Michelle Ellefson, a reader in cognitive science at the University of Cambridge Faculty of Education, whose current research projects focus on the role of executive functions in school achievement, was one of six authors on the paper. Executive functions provide the framework within which metacognition sits, she says: they cover “our ability to deal with information and work with it, including our ability to ignore distractors, our ability to switch between tasks”.

Metacognition: how to assess pupils’ metacognitive skills

These abilities have clear relevance in terms of how children learn in the classroom.

“A lot of additional work that we do as learners to make learning possible isn’t necessarily about how well we understand some complex science or arithmetic computation, but it’s about how [we’re] dealing with the information we’ve got, ignoring distractors so that we can do something, even being aware of what level of understanding we have to reach,” Ellefson explains. “It’s a really complex process.”

That complexity is just one of the reasons why metacognition is difficult to measure - and why research in this area has so far been rather limited and for the most part focused on memory.

“We felt that thinking about thinking is not just about identifying how much you can remember,” says Ellefson.

The Zoo Task study included 204 children, aged between 7 and 12, from elementary schools in “high-poverty urban areas in the eastern United States”, mostly African American children, who were taking part in an after-school chess programme.

In the task, children were asked to help the zookeeper feed some animals, using the shortest route between the cages, while being sure to stay on the paths and avoid going into the cages. They then drew lines to indicate what they believed to be the most efficient path, with each test increasing the number of animals and, therefore, the difficulty.

Before and after each trial, participants made prospective and retrospective judgements about how well they did.

The children’s completed paths were judged on accuracy - whether they found the shortest possible route - while the researchers also used a computer program to code the paths against various criteria: a clear route, all animals visited, use of the start and finish points, no backtracking, and evidence of a strategy (eg, ticking off the animals they had fed from their list).

Another element of the study was the attempt to assess “metacognitive monitoring” - the “awareness of one’s cognitive processes and evaluating progress on an ongoing task”, as the paper puts it - or “this idea that you’re aware of how you’re doing in the moment”, as Ellefson expresses it.

The children who deployed that skill “realised they had gone on a path that wasn’t as good as it should be, and they erased it and started it over,” she says.

The children also completed a more standard test of metacognition - a metamemory (awareness of one’s own memory) task based on recalling pairs of images. The Zoo Task results matched up well with those scores, suggesting the new measure was reliable while, the researchers argue, capturing a far broader range of metacognitive skills than a metamemory test.

The big question

With schools increasingly choosing to teach pupils metacognitive strategies, tools that measure the full range of these skills are important: they would help teachers to better understand whether that instruction is working.

It’s possible, though, that these tests would not show much, or any, progress in pupils: when it comes to how far metacognition can actually be taught, Ellefson says that there are still some big question marks.

“At some level, this is the big question for those doing research in this area,” she says, adding that she hasn’t seen a lot of interventions “teaching kids what [metacognitive] strategies might work and why”. While there are “a lot more interventions in this idea of executive functions”, it is “a mixed bag” in terms of results, which is “incredibly disappointing at some level”, she continues.

However, “it does seem that there’s definitely variation among children - and that children who do better in these cognitive skills do better in school”, she says, adding: “We haven’t figured out interventions that actually work.”

The findings of the EEF-funded project mentioned earlier are clearly promising, though. And Ellefson also highlights the thesis recently completed by one of her doctoral students, Helen Barsham, a headteacher.

Barsham applied a “metacognitive intervention” with a class of Year 6 pupils, “where she was telling them how memory works and why there are certain things that are really helpful for doing better in terms of memory”, particularly practice testing.

“Her findings are basically that it [the metacognitive intervention] tends to work at reducing test anxiety - but only for those who are the most anxious” about exams, says Ellefson. It’s an example “of ways that teachers might implement these kinds of findings about metacognition and how theories we have about learning might be useful in the classroom”, she adds.

In terms of where the findings of the Zoo Task study might be taken next, Ellefson says that this approach needs to be made more accessible to teachers before it can be used in the classroom.

“Quite often, what comes out of cognitive psychology isn’t necessarily right for school settings,” she says.

“If we’re going to do an intervention, then we’ve got to be able to do something that is not labour intensive. In terms of who is coding it, it has to be easy to deliver in classrooms.

“I suppose that’s why I see this sort of task as important for the future of research and looking at interventions in this area.”

Switching the paper Zoo Task to an app-based system - allowing researchers to test more efficiently the efficacy of interventions to improve metacognition - is “probably how this tool gets used first”, she says.

“Then, after having some sense of how interventions might work or the differences across ages, that’s when it could be more usable for teachers,” says Ellefson.

Measuring metacognition among children - by providing more data on a typical six-year-old, for example - would give teachers “better informed information” about whether a child is struggling or on track for their age, she adds.

Without getting too meta, all this might be food for thought in the future, when you’re thinking about thinking about thinking.

John Morgan is a freelance journalist

This article originally appeared in the 24 September 2021 issue under the headline “Tes focus on...Measuring metacognition”
