Did you know that in Australia, the federal government has introduced software designed to stop companies using school-by-school exam results to produce national league tables? Or that in the Canadian provinces of Ontario and Alberta, the authorities have developed school-by-school accountability that is, in the words of a right-of-centre British think-tank, "the antithesis of England's big stick"? Or that in Shanghai - the top performer worldwide in last year's Programme for International Student Assessment (Pisa) tests - school-by-school data tends not to be published?
To listen to the debate in England, one would think the adoption of test-driven accountability for schools was inevitable. After nearly a quarter of a century of results-based punishment and reward here, politicians and officials have led us to believe that the logic of this approach is inescapable, even if teachers may not like it: schools need to be held to account for their pupils' results; parents need information on school quality; tests and exams are to be the vehicle; and schools with poor results must be singled out for intervention in a bid to raise standards.
It is true that this model now prevails in certain countries including England, the US and, in the past three years, South Korea. The Bew review of key stage 2 assessment, published this week, was also instructed to stick with the "rigorous" English model here.
Readers may be surprised, then, that other countries seem highly resistant to this approach - and that many of them boast good results in the best-known system of international tests. That was my reaction after carrying out research for the heads' union, the NAHT, on how some of the most successful countries in tests including Pisa use test-based accountability.
Of the 10 countries I looked at, only two - the US and South Korea - now have truly high-stakes accountability for schools. Japan, Finland, the Netherlands, New Zealand, Canada, Germany and France do not. Nor does Australia, although it might be more accurate to say "not yet" in its case.
Let us start with the top performers in the recent Pisa tests. Finland is a well-known case study. It has no inspections, no pupil-by-pupil national tests, and a high-trust system based on training a highly skilled teaching workforce.
Another high performer, Japan, also lacks school-by-school league tables. A report by the Central Council for Education for the Japanese ministry of education preceding the introduction of the latest tests said that the country should "avoid school ranking and unhealthy competition".
In the Netherlands, pupils are tested at age 12 and schools are given the results to help with self-evaluation. But schools release their scores only if they choose to.
In Australia, the federal government recently introduced a national testing system, with school-by-school results published on a website. But this controversial approach differs from ours: each school's scores sit within pages of contextualised information, and the government, under pressure from the unions, has sought to prevent the data being used to compile simple rankings. Politicians there have spoken out against using the data to rank schools.
Although a concept of accountability for school quality exists in most countries, I found a schism between how results are used in the Anglo-US model - where schools can face takeovers and heads can lose their jobs when results are bad - and elsewhere, where the approach tends to be more supportive.
In Germany, painstaking reforms stemming from the country's poor performance in Pisa 2000 and facilitated by the federal government have pushed for the introduction of curricular standards, stating what pupils should know. But a 2005 report for the government said: "Test results ... must not be linked to the allocation of resources to schools or the promotion of teachers, because this undermines the beneficial effect of the tests on quality development."
The 493-page report added: "Teachers, as the most important guarantors of reform, must be won over." A German academic told me that, if a school failed, it was seen as a failure of the system, not of an individual institution.
Canada is perhaps the most interesting case. Most provinces there have testing systems, with results made public. But there is official resistance to the use of scores to rank schools. Last year, Ontario's Education Quality and Accountability Office, which publishes school-by-school data profiles, issued a statement saying that "a school's rank does nothing to help parents, students, the community and other education stakeholders understand what's going on inside the school".
A 2008 report by the British Conservative-leaning think-tank Policy Exchange concluded that the provinces of Ontario and Alberta had "deliberately sought to develop an accountability framework ... that is, in their own view, the antithesis of England's big stick". The report adds: "The Ontario system has chosen to focus on the provision of support structures. Where pressure is applied, positive rather than negative incentives are used".
It also points out that the philosophy in the two provinces centres on promoting improvement and professional learning within existing school structures, rather than what a leading official in Ontario described as the "unsustainable" English model of subjecting schools to "takeovers" by outside organisations.
There is no inevitability to the English model, and countries taking a less confrontational approach on performance can do well. The method pursued in England and the US - in which politicians seek to root out failure through dramatic structural change of school governance arrangements, often finding themselves and unions at loggerheads - is far from universal.
Hopefully, this understanding will influence politicians' pronouncements in this country, but I'm not holding my breath.
Warwick Mansell is the author of Education by Numbers: The Tyranny of Testing.