No room for reason in this arbitrary idea
I recently suggested to a senior government official responsible for developing policy on whole-school targets that the only objective numerical target for my school was 100 per cent. He rejected this on the grounds that all targets had to be "reasonable".
But all numerical targets applied to organisations are, to a greater or lesser extent, arbitrary. Take, for example, the sequence of results in the graph below. It could represent the number of GCSE higher grade passes, the frequency of staff lateness, or the amount of toilet-paper consumed. Like most natural processes its performance is relatively stable, but deviates slightly from its average value. This example has a mean of 60 and a standard deviation of 10.
Standard deviation is a measure of the spread of results such that very few, if any, will lie beyond three standard deviations either side of the average (in this case, they will fall between 30 and 90). We can use that spread to calculate the chance of obtaining any particular range of results.
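That calculation can be sketched in a few lines of Python. This is a hypothetical illustration, not anything from the article itself: it assumes the stable process is roughly normally distributed with the stated mean of 60 and standard deviation of 10, and uses the normal cumulative distribution function to find the chance of a result landing in any chosen range.

```python
from math import erf, sqrt

def normal_cdf(x, mean=60.0, sd=10.0):
    """P(result <= x) for a normal process with the given mean and sd."""
    return 0.5 * (1 + erf((x - mean) / (sd * sqrt(2))))

# Chance of a result falling within one standard deviation of the mean,
# i.e. between 50 and 70 -- roughly 68 per cent.
p_within_one_sd = normal_cdf(70) - normal_cdf(50)

# Chance of a result falling between 30 and 90 (three standard deviations
# either side) -- about 99.7 per cent, hence "very few, if any" beyond it.
p_within_three_sd = normal_cdf(90) - normal_cdf(30)
```

The three-standard-deviation figure is what justifies the claim that results will almost always fall between 30 and 90.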
So where might we choose to set next year's target, and what are the chances of achieving it?
Only a wimp would propose a target at or below the average, but it is instructive to note that the chance of hitting or doing better than the mean is only 50:50 in any stable process, notwithstanding the so-called "standards expected" in tests at age 11 or 14 or whatever. Nevertheless, let us plump for the "reasonable" target of 70; after all, we exceeded it in 1991 so should be able to do it again. But this figure, being one standard deviation above the mean, has only a 16 per cent chance of being met or bettered. If we were to set a more "stretching" target, say at 80, the probability of success falls to around 2 per cent.
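The 50:50, 16 per cent and 2 per cent figures can be checked directly. The sketch below is an illustration under the same assumption as before, that the process is roughly normal with mean 60 and standard deviation 10; the function name is my own, not from the article.

```python
from math import erf, sqrt

def prob_at_or_above(target, mean=60.0, sd=10.0):
    """P(result >= target) for a normal process: 1 minus the normal CDF."""
    return 0.5 * (1 - erf((target - mean) / (sd * sqrt(2))))

prob_at_or_above(60)  # the mean: 0.5, a 50:50 chance
prob_at_or_above(70)  # one sd above: about 0.16
prob_at_or_above(80)  # two sd above: about 0.02
prob_at_or_above(30)  # three sd below: about 0.999, near-certain
```

The last line is the wimp's option from the next paragraph: a target of 30, below any recorded performance, is almost guaranteed to be met.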
Only if we set the target at 30, lower than any previously recorded performance, could we be totally confident of achieving it, all other things being equal.
Given this distribution of probabilities, we need powerful methods to defy the odds against meeting even our "reasonable" numerical target. So, what methods do we have at our disposal?
* Trust to luck. Just because the odds are not good doesn't mean we can't be lucky and keep the wolf from the door for another year at least.
* Distort the data. Redefining the events which contribute to a target number is a method which has a shameless history in government (redefining unemployment), business (altering delivery dates), the police force (categorising burglaries as criminal damage) and in education (redefining unauthorised absence).
* Distort the system. This is the increasingly popular practice of applying extra time, personnel or materiel to the point in the system that generates the target number, even if doing so damages other important, though untargeted, parts of the system (for example, focusing on those students whose likely GCSE performances cluster around the C/D boundary at the expense of the most and least able).
* Improve the system. This strategy is most actively pursued by quality-oriented organisations where central processes are studied in minute detail to generate improvement upstream of the output. It requires a clear understanding of the processes involved, encouragement and time to experiment, and a long-term commitment to the methodology - none of which seem to apply in the current educational climate.
So, having selected our target by intuition, by extrapolation or by benchmarking - all suitably arbitrary means - we apply our method and await the outcome.
Assuming that we hit our target for next year, what will it mean? Can we claim to have improved our performance? Unfortunately, we won't be able to say with any degree of certainty until we have at least another five points on our graph to allow us to compare its average performance with that which we were trying to improve. Can we wait that long? Can we be certain that our world will not have changed so significantly in that period as to have distorted the effects of any changes we have made?
The fact is that we cannot draw any useful conclusions about how to improve performance from meeting annual numerical targets, no matter how reasonable. But, I suspect, that is not the source of attraction.
The real beneficiaries of the use of annual numerical targets are those in charge. They continue to work within the wholly discredited idea that quality improvement comes primarily from the workforce putting more effort into the system. Those who can't do it should be removed, while those who can merely require a proper sense of direction and more motivation.
As subscribers to the "carrot and stick" school of leadership, they believe that the necessary motivation will appear if they, those in control, apply suitable levels of reward and punishment, praise or shame. Being fair-minded and humane, they would not wish to be arbitrary in their actions, so need some objective means of deciding when to reward and when to punish.
Numerical targets seem to provide the ideal instrument for this purpose. Consequently, they are content to allow the workforce to set its own numerical targets - within appropriate parameters, needless to say. What could be more "reasonable"?
Robert Dupey is head of the Ecclesbourne School, Duffield, Derbyshire