The world before the Pisa rankings was a bit like the non-European world before it encountered the Victorians. Until the stern men in large hats and big gunboats steamed in, non-Europeans naturally assumed that their way of doing things was the right way of doing things. Then the Victorians arrived to demonstrate the superiority of manufactures, cricket and trousers, and everything changed.
It was like that in education until the 1990s. Every country assumed it had the best education system in the world because no one made statistical comparisons. Then the Organisation for Economic Co-operation and Development (OECD) rolled out its Programme for International Student Assessment (Pisa) rankings, which revealed that Finland, Singapore and South Korea were far better than everyone else. Countries such as Germany that had considered themselves educational titans were shocked to discover that they were the pedagogic equivalents of England's football team. Ever since, education ministers worldwide have sought with increasing earnestness to benchmark their education systems against the Pisa gold standard.
So it comes as a shock to discover that many experts consider Pisa statistically invalid and fundamentally flawed, or in their words "not reliable", "utterly wrong" and "useless" (pages 28-32). Even the OECD admits that problems with the sample data used mean that "large variation in single (country) ranking positions is likely".
To put that in context, and depending on which data are used, the UK could finish anywhere between 14th and 30th, Denmark between fifth and 37th, Canada could be second or 25th and Japan anywhere from eighth to 40th.
Pisa has two big problems, according to its critics. The first is highly technical: the statistical model the OECD uses isn't fit for purpose. The second is more comprehensible: different students in different countries weren't asked the same questions. Indeed, it seems that more than half the teenagers taking part in Pisa in 2006 were not tested on any reading questions whatsoever, which didn't stop the OECD from giving them a reading score.
The organisation's response to this criticism has been less than helpful. It has, its critics claim, either refused to engage with them or airily retorted that its main work is analysis of what works best, not mere rankings. That is disingenuous. Whether the OECD likes it or not, politicians get excited by the number the organisation gives their countries in its table, not by its scholarly surveys. Ditching the rankings, as some suggest (page 10), might appease academic purists but it would make Pisa far less attractive to policymakers.
Does this mean that the Pisa rankings should be ignored? No, it does not. The OECD claims with some justification that there will always be problems in trying to compare more than 50 different systems and that no mechanism will be perfect. But that does not mean the OECD's methods cannot be refined, or that its critics should be met with silence. Mystery does nothing to establish credibility.
Its critics should bend, too. Rankings, however crude, are here to stay because people want context; they want to know where they stand and what they can do to improve. And no amount of bellyaching over imperfections is going to change that.