Eight lessons we can learn from the latest Pisa rankings
Rob Coe, professor in the School of Education and director of the Centre for Evaluation and Monitoring at Durham University, writes:
The sound and fury is starting to subside following the latest triennial Pisa study, so perhaps it is now time to focus on what lessons we ought to draw.
1. There are clearly real issues in comparing countries with different systems. This is the case in terms of the education systems themselves, data collection and the fit between what is tested and different curricula. There are also differences in attitudes to study and testing, such as the now well-known issue of pupils answering multiple-choice questions differently: those in some countries leave them blank, while those in others guess. Most of the cross-country comparisons depend on the problematic assumptions that test items can be translated in a straightforward way and that they perform the same way in different contexts.
2. But, whatever its limitations, Pisa is the best evidence we have of its type. Of course there are problems with sampling and context, but it would be wrong to dismiss its findings completely. The challenge is to apply the right level of caution – neither too much nor too little. The debate it has provoked is challenging and illuminating, despite any limitations.
3. The third and, in many ways, the crucial lesson for England and the UK is that our Pisa results show little difference in maths, reading and science over recent years.
4. One implication is that improvements in GCSE results probably do not reflect real, sustained improvement over time. I realise that this lesson may be challenging, but Pisa gives us independent, internationally benchmarked time-series data that point towards only one conclusion.
5. An interesting feature of the debate around Pisa is the way that different people project diagnoses onto the findings. The fact that UK Pisa results are unchanged could be good or bad; could be blamed on – or credited to – the previous or current government. It could be attributed to any number of changes we have (or haven’t) made or to selected differences between us and (selected) high-performing countries. The point about all these claims is that they are completely independent of actual Pisa results. The lesson is that these “explanations” are spurious and unhelpful.
6. Related to this, we must stop cherry picking from high-performing systems. Almost every characteristic invoked as the recipe for their success can also be found in some low performer. Even features associated with high scores may not be things we could copy. And, if we want examples of excellent practice, we can always find them much closer to home.
7. We need to look for knowledge on how to improve our education systems in the evidence, not in folk wisdom, common sense or what merely seems plausible. Too much of that kind of thinking has failed to make any difference in the past.
8. Finally, even when the evidence is clear about how to improve performance, the challenge remains how to make changes effectively. This is something we can only learn by trying things out and evaluating them robustly.