Topping's formula is as follows: the percentage of the S5 roll passing three or more Highers, multiplied by the school's FSM uptake percentage divided by the national average FSM figure of 16 per cent.
The formula seems completely arbitrary and produces many ludicrous results.
For example, Banchory Academy (51 multiplied by 1.2 divided by 16, giving a weighted score of about 3.8) faces an impossible task, with its present low FSM percentage, in getting near the "top" 50 schools, all of which have scores over 25.
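The weighting described above can be sketched as follows (a minimal illustration; the function name is mine, and the Banchory figures are those quoted in the text):

```python
# Sketch of Topping's FSM weighting as described above.
NATIONAL_FSM = 16.0  # national average free-school-meal uptake, per cent

def weighted_score(pct_three_highers: float, fsm_pct: float) -> float:
    """Percentage of the S5 roll passing three or more Highers,
    scaled by the school's FSM uptake relative to the national average."""
    return pct_three_highers * (fsm_pct / NATIONAL_FSM)

# Banchory Academy figures quoted in the text:
print(round(weighted_score(51, 1.2), 1))  # prints 3.8
```

With an FSM uptake of 1.2 per cent, even a perfect pass rate of 100 would give a weighted score of only 7.5, which is why the school can never approach the "top" 50 under this formula.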
The Scotsman enshrined Topping's results in "top 50" and "bottom 50" league tables. When one reads Topping's information sheet describing the advice he gave to The Scotsman, one is led to suspect an elaborate hoax.
He details many of the unreliabilities that spring from using FSM figures as the sole means of adjustment. He knows that FSM is an imperfect measure of socio-educational disadvantage because of changing unemployment patterns, local attitudes to free school meals etc. He is aware of the need for measures of advantage as well as disadvantage: schools with similar FSM often differ in the proportion of children from educationally advantaged homes.
He writes: "the scalar properties of FSM are unknown - is 50 per cent twice as bad as 25 per cent?" He realises that Higher pass rates in small schools tend to be unstable. His study was based only on 2003 results.
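The small-school point is simple sampling variability: in a cohort of 20, a single pupil shifts the pass percentage by 5 points. A rough sketch of the effect (the cohort sizes and pass probability are illustrative, not from the study, and each pass is treated as an independent event, which is a simplifying assumption):

```python
import math

def pass_pct_standard_error(p: float, n: int) -> float:
    """Approximate standard error of a pass percentage for a cohort of
    n pupils, each passing independently with probability p."""
    return 100 * math.sqrt(p * (1 - p) / n)

# A small S5 cohort versus a large one, both with a 50% pass probability:
for n in (20, 200):
    print(n, round(pass_pct_standard_error(0.5, n), 1))
# prints:
# 20 11.2
# 200 3.5
```

A year-to-year swing of ten percentage points or more in a small school is therefore entirely consistent with no change at all in the school itself, which is one reason a single year's results make a poor basis for a league table.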
So why did Topping use this method? He knows that there is a fairly strong negative correlation (around minus 0.7) between FSM uptake and schools' public exam performance in Scotland. He may have reasoned that, despite their dubious reliability, FSM figures are all we have to work with at the moment.
This was also the attitude of the HMI audit unit at the time of target-setting in the mid-1990s, although it did not use Topping's wildly overcompensating formula. The view was that FSM figures alone could be used to produce at least some "levelling of the playing field" and allow fairer comparisons to be made between the effectiveness of schools.
This view is profoundly mistaken. The numerous sources of unreliability identified by Topping can alternatively be described as confounding variables.
These are factors which influence the results in an uncontrolled, unmeasured way. Because the influence of the confounding variables and the influence of school effectiveness are inextricably mixed, there is no way of telling if a difference between two schools in their adjusted results represents any real difference in school effectiveness.
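The confounding argument can be made concrete with a toy illustration (all figures invented): two schools with identical FSM uptake, one of which draws more pupils from educationally advantaged homes - a factor the FSM adjustment does not measure. The adjustment leaves the whole gap intact, and a reader of the table cannot tell whether it reflects effectiveness or the unmeasured advantage.

```python
# Toy illustration (all numbers invented): two schools, same FSM uptake.
NATIONAL_FSM = 16.0

def adjusted_score(raw_pct: float, fsm_pct: float) -> float:
    # FSM-only adjustment in the style discussed above
    return raw_pct * (fsm_pct / NATIONAL_FSM)

# School B's raw pass rate is higher solely because more of its pupils
# come from advantaged homes - a confounder FSM does not capture.
school_a = adjusted_score(raw_pct=40, fsm_pct=16)  # 40.0
school_b = adjusted_score(raw_pct=55, fsm_pct=16)  # 55.0

# The adjusted gap survives untouched, and nothing in the table says
# whether it measures effectiveness or the unmeasured advantage:
print(school_b - school_a)  # prints 15.0
```

Since the adjustment can neither remove nor even estimate the confounder's contribution, a 15-point adjusted gap is equally consistent with a large difference in effectiveness and with none at all.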
The FSM methods mentioned therefore provide no information on school effectiveness and league tables derived from them are worthless.