
Lies, damn lies and government statistics

Last week's national test results provided plenty of ammunition for the politicians. But remember, says James Tooley, the levels of attainment were guesswork

An extraordinary bout of amnesia is afflicting the debate about the national test results. Through the media we've been told by all parties that half of all pupils aged 11 and 14 have "sub-standard" results, that children are "falling behind", which is an "appalling" indictment of the "shocking shortcomings" of 16 years of Tory rule, or if you prefer, of "progressive teaching in primary schools".

The results show nothing of the kind. All they show is that the Government's National Curriculum Working Groups got it wrong when they guessed the levels of attainment for certain age groups. No need for political gloating or educational heart-searching. All that is needed is a little jog of the collective memory.

Recall TGAT - the Task Group on Assessment and Testing, set up by Kenneth Baker, chaired by Professor Paul Black - which reported in 1987? It was the first to suggest a 10-level scale, and it gave a bold diagram to show how children of various ages might be mapped onto it. But it was quite specific that this was only "rough speculation" about the limits within which pupils of any age would fall. What could be more explicit than that?

Nor was this false modesty: it was hard enough to judge whether any subjects would fit into their framework at all, let alone the levels of achievement to be expected of average pupils.

Working groups were then employed to flesh out this framework for individual subjects. Duncan Graham, later Chairman and Chief Executive of the National Curriculum Council (now incorporated into the School Curriculum and Assessment Authority), started out as an ordinary member of the Mathematics Working Group. In his book, A Lesson For Us All, he describes how the group was split into "progressive" versus "traditionalist" factions: "Nobody could agree on anything," he tells us.

Certainly they couldn't agree on what levels of attainment were to be expected from average children. After six months, they had made virtually no progress, and their interim report was received by Kenneth Baker with dismay. Various members resigned, and Duncan Graham - a self-confessed non-mathematician - was asked to oversee the final process.

With just six months in which to do a year's work, the final report was miraculously submitted on time. But Graham was very careful to note, in his introductory open letter to the Secretaries of State, something which everyone was aware of then but which has now been curiously forgotten: "The biggest challenge has been to pitch the levels of attainment within targets at the right level... As the results of the first pupil assessments against the targets we propose come in, it should become clear where adjustments to our recommendations may need to be made."

This is crucial. For the test results just published are the only national tests for those age groups - previous years having been disrupted by boycotts. The mathematics working group noted that: "It would be surprising if we have got everything right." What these results show is that they didn't get the expected levels right. Not surprising really, given the paucity of research available, the tight deadlines and the political pressures they worked under.

Science and English told similar stories. Indeed, the phrasing in the final science report is almost identical: "The most difficult aspect of our task has been the definition of specific, age-linked levels of attainment within attainment targets... It will be important to monitor closely... so that necessary adjustments are made." The English report was more terse, but the underlying sentiments were the same: "You will know that our timescale has been somewhat constrained... we have not been able to test out our recommendations before finalising them."

You might think that, with all the National Curriculum and testing revisions over the last few years, detailed analysis of these expected levels had been undertaken before. But, at best, this could only have been done using teachers' subjective assessments, and could not have given a national benchmark. No, the only way of analysing the expected levels of achievement was through a nationwide test, the results of which are only just before us.

Moreover, throughout Sir Ron Dearing's extensive consultation process, a great deal of dissatisfaction was expressed with the 10-level scale. Dearing reports that teachers, teacher associations and Chief Executive Officers were evenly divided over whether to retain the 10-level framework. Alternative approaches were mooted, but it was political timing and professional compromise that won through in the end, not the virtues of the system chosen.

So, no-one can make any inferences from the current national testing results. No-one is to blame, no-one to praise. Progressive teaching methods in primary schools, Labour councils, the Tory Government, teaching unions, teachers and pupils, all escape unscathed from these results. What the results do give us is a benchmark for future years, a baseline which can, if required, be used to assess whether standards are rising or are in decline. But I can't help thinking that they also give us an important lesson in the foolhardiness of letting governments intervene too closely in areas of education where they do not belong.

Dr James Tooley is director of education and training at the Institute of Economic Affairs and a research fellow at Manchester University
