From the PICSI (pre-inspection context of schools indicator) annex guidance to Office for Standards in Education inspectors in evaluating results of standard assessment tasks: "The percentage of pupils reaching specified levels in the key stage tests . . . will vary from one year to another, depending not only on variations in the quality of the teaching and learning but also on variations in the abilities of the pupils.
"The latter variations are likely to have the greatest impact on the variability of a school's key stage tests where cohort sizes are small . . . For very small cohorts - below 20 - inspectors should place very little weight on the key stage test results for one year."
In my county, more than a third of primary schools have 20 pupils or fewer per year group. Even for substantially larger year groups, the guidance urges caution. A change of 1 per cent is said to be the minimum significant change for a cohort of 60.
Even a 15 per cent variation in a group of this size requires other supporting evidence before it can be considered a firm indicator of changes in the school's standard of achievement. Only about a dozen schools in my county have a cohort size of more than 60 pupils.
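The arithmetic behind this caution is simple, and can be sketched (this illustration is mine, not part of the guidance): a single pupil's result shifts a school's percentage by 100/N points, so the smaller the cohort, the larger the swing one child can cause.

```python
def per_pupil_swing(cohort_size):
    """Percentage-point change in a school's result caused by one pupil."""
    return 100.0 / cohort_size

# In a cohort of 20, one pupil moves the headline figure by 5 points;
# in a cohort of 60, by roughly 1.7 points.
print(per_pupil_swing(20))
print(per_pupil_swing(60))
```

On this arithmetic, a handful of pupils having a good or bad year can move a small school many places in a league table without any change in the quality of teaching.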
So why are these unreliable figures used to compile annual league tables?