It seems, from a conversation with one of my teacher friends, that some school managers are as innumerate as the general population. It's not that they can't do arithmetic, but that they don't know what numbers mean.
The current Scottish Office Education and Industry Department guidance to schools, Raising Standards - Setting Targets, explains how candidates in a particular Higher grade subject may be categorised according to their "prior performance" as measured by their GPA.
This little piece of jargon is the grade point average - the average grade obtained by a student in all his or her Standard grade subjects.
Students with the same GPA are said to be "similar". Then the average Higher grade in any particular subject that these students subsequently achieve is taken as the "realistic minimum target" for the following year's "similar" candidates.
The difference between this target and what the candidates actually achieve is called their "relative" progress, and is used to determine the value added indicator for their subject departments.
In my friend's school, the "target" was interpreted as the result that a candidate with this particular GPA was expected to obtain. The subject department was then criticised when one candidate failed to achieve it.
The candidate in question attained a GPA of 2.5 the previous year. Because "similar" students nationally had obtained an average range score (or band) of 9.0 in the Higher grade examination for the subject, it was argued that this candidate "should have obtained" this result too.
This is a complete misinterpretation of the word "average". If band 9 is the average result, roughly half of the candidates must do worse. An analogy may make this more obvious. When two dice are thrown, the sum could be any value from 2 to 12. The average sum of a large number of such throws is 7.
This is also the most likely result from a single such throw - it has a 1 in 6 chance of occurring.
However, we would not be surprised if the sum of the two dice was different from 7. A total of 8, for instance, has a 5 in 36 chance of occurring. So does a total of 6. Even a double-six, with its mere 1 in 36 chance (3 per cent), turns up occasionally. Likewise, if we were asked to predict the Higher grade result of an unknown candidate with a GPA of 2.5, our best guess would be band 9 (grade C). But we would also know that his or her actual band could be anywhere between 1 and 14.
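The dice arithmetic quoted above can be checked directly by enumerating all 36 equally likely outcomes - a quick sketch in Python:

```python
from fractions import Fraction
from collections import Counter

# Count how often each total occurs among the 36 equally likely throws.
counts = Counter(a + b for a in range(1, 7) for b in range(1, 7))

# Convert counts to exact probabilities.
prob = {total: Fraction(n, 36) for total, n in counts.items()}

print(prob[7])   # 1/6  - the most likely total
print(prob[8])   # 5/36
print(prob[6])   # 5/36 - same chance as a total of 8
print(prob[12])  # 1/36 - a double-six, about 3 per cent
```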
Analysis shows that "relative progress" - the difference between what candidates are expected to get, on the basis of their GPA, and what they actually achieve - varies between about -8 and +8.
Its standard deviation is nearly 2.0. In statistics, this means that there is a 1 in 6 chance our unknown candidate will get at least band 7 (grade B).
Even a band 5 (grade A) is not impossible. It's about as likely as throwing a double-six with the dice. In the same way, there is almost a 50:50 chance the pupil will fail (band 10 or more).
About a sixth of all candidates will be two or more bands below their "target". This means that nearly all subject departments in most schools will contain some "under-achievers".
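All of these figures - a 1 in 6 chance of band 7 or better, a grade A about as likely as a double-six, and about a sixth of candidates two or more bands below target - follow from treating relative progress as roughly normally distributed with a standard deviation of 2. A sketch of that calculation, assuming a mean of band 9 (and remembering that a lower band number is a better result):

```python
from math import erf, sqrt

def normal_cdf(x, mean, sd):
    """P(X <= x) for a normal distribution, via the error function."""
    return 0.5 * (1 + erf((x - mean) / (sd * sqrt(2))))

MEAN, SD = 9.0, 2.0  # average band 9 (grade C); sd of relative progress ~2

# Band 7 (grade B) or better: one standard deviation above average.
p_band7_or_better = normal_cdf(7, MEAN, SD)   # ~0.16, about 1 in 6

# Band 5 (grade A) or better: two standard deviations above average.
p_band5_or_better = normal_cdf(5, MEAN, SD)   # ~0.02, like a double-six

# Two or more bands below target (band 11 or worse).
p_two_below = 1 - normal_cdf(11, MEAN, SD)    # ~0.16, about a sixth
```

The symmetry of the normal curve is what makes the last figure mirror the first: the same sixth of candidates who land a band or more above their "target" is matched by a sixth who land two or more bands below it.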
Why is it assumed that such "under-achievement" is the fault of the subject department? This is like criticising a dice thrower for getting only a double two. The result is well below average but this is hardly within the thrower's control.
Statistical analysis shows that only a tiny proportion of the "relative progress" made by candidates can be attributed to the schools they attend.
It is small because only relative progress is being measured. Schools have a huge effect on students' achievements. But if all schools have similar effects, the differences between them will be tiny.
There are many reasons, other than teacher ineffectiveness, why S5 students fail to reach their potential. It is a time of immense personal growth, social distraction and emotional turmoil.
Hence it is unjustified to blame a subject department if particular candidates fail to reach their "targets". Given the way that they are calculated, half of all candidates will fail to reach them anyway.
So perhaps the measurement is at fault, not the teachers.
Clearly, it is practical to use "prior performance" when selecting applicants for subsequent courses - for example, in further and higher education.
It also helps in advising students which subjects to attempt at the Higher grade. But the SOEID guidelines specifically warn teachers against using GPA as the sole determinant in this process.
Their message is that "the grade point average can be used to set realistic targets, to monitor and to support the pupil through the year". This is very different from using them for "shaming and blaming" subject departments.
We cannot all live in Lake Wobegon, where all the women are strong, all the men are good looking and all the children are above average.
Bob Sparkes is a lecturer in education at Stirling University.