The Government's method of getting primary children to succeed academically is very simple. It goes like this ...
All schools have sophisticated computerised progress-tracking software. Teachers record children's levels in English and maths on the school's system at least once a term. The computer prints out graphs and charts to show which children aren't progressing quickly enough to reach the expected level 4 in Year 6. The results are handed to the school's assessment co-ordinator, who pores over them and hands them to the senior management team (SMT).
The SMT reports to the head. If there is a problem, the head sends for the offending teacher and says: "Mrs Smith, are you aware that Janice, Elspeth and Raheem are lagging behind in their maths, and we need to do something about it?" A couple of teaching assistants are diverted from other tasks and assigned to the trio for an extra boost - enough to guarantee an eventual level 4 - and that is that. Bob's your uncle. Another sure-fire success. Thank heaven for computerised assistance.
Except that, in reality, life isn't like that.
I have just heard from a reader whose wife is a middle-school headteacher. The data agenda is causing her to tear her hair out with frustration. "There seems to be a significant loss in translation regarding the use of statistical data," she says. "Children come to me at age nine and have already been tested at key stage 1. On entry, we assess each child and these assessments are set against the KS1 results, together with work from their first school. In almost all cases, the staff find that their entry assessments are lower than those provided."
"By the KS2 assessment," she continues, "each cohort will have lost a number of able children to other quasi-selective schools and they will have been replaced with generally less able ones. So what should the assessment of overall progress be compared with? On the face of it, it seems to be a percentage comparison with 'national averages' as interpreted by Ofsted and the local education authority, but the structure and composition of the cohort has changed and the starting point is therefore simply not consistent."
I am sure virtually all primary teachers will sympathise with her conclusion. "There seems to be no room for the idea that a child's progress will have some randomness and unpredictability, that performances in tests will have an innate variation," she writes. "What was meant to be a helpful expectation of a child's likely progress ... has become a target, and God help the teacher if they don't achieve it. Cohort variability, individual result distribution profiles, random deviations from the expected ... none of these are given any consideration."
I couldn't agree more. Another reader, in desperation, has found the shortest route to peace through the data quagmire. "My headteacher examined my data," she says, "and hauled me into his office. I've worked my backside off with this class - a particularly challenging one - but I was given a 20-minute castigation because several children, all special needs, hadn't made the progress he needed. I was so angry, the next time around I just massaged my results a little."
I bet she isn't the first. Trouble is, of course, the head was probably fearful too. That is the climate this awful data agenda has fostered.
Mike Kent is headteacher at Comber Grove Primary, Camberwell, south London. Email: firstname.lastname@example.org.