Testing is not enough to measure standards

David Hawker (TES, February 16) claims that "national testing can provide an overall picture of national performance, and give each school a picture of its own comparative achievement". Well, up to a point, perhaps.

But how far does that point go? For example, national testing, as currently conceived, cannot monitor test standards in order to decide whether apparent changes in performance over time are due to changes in pupils' learning or in the difficulty of the tests. It is not sufficient that "a great effort is made to keep test standards constant" because the questions are changed from year to year. The examination boards, with a similar problem, have been making such efforts for years and are still far from being convincing.

Tests which monitor standards must have a large proportion of constant questions. By such means the Assessment of Performance Unit, in comparing results from its 1982 and 1987 surveys, showed that the mathematics curriculum in both primary and secondary schools had broadened following the publication of the Cockcroft Report in 1982, and that pupils' achievement had improved in some areas of mathematics and declined in others. It also found that one important topic had improved in secondary schools and fallen in primaries.

These results emphasise the importance of monitoring over a wide range of the curriculum - certainly wider than the current national testing can possibly attain in two relatively short tests given to individuals.

One of the most significant findings of all the APU subject teams was that small changes in questions could produce large shifts in difficulty; short tests could, therefore, give a misleading picture in some topics.

A monitoring model contrasts with national testing in that only a small sample of pupils needs to be involved, in surveys which would take place every three to five years. The breadth of monitoring is achieved by giving each pupil in the sample a selection from the total pool of questions and summing over the sample. No comparison of individuals, schools, or LEAs would take place, because the purpose is to produce a national picture of standards. Unlike national testing, the exercise could also look to the future by including questions on emerging areas. In all these ways it is a complementary exercise to national testing, which looks at the levels of individuals and, as the Government presently requires, at comparisons of schools in relation to the current curriculum.
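The sampling design described above can be sketched in a few lines. This is a hypothetical illustration, not the APU's actual procedure: the pool size, booklet size, sample size, and the "true facility" figures are all invented for demonstration. Each sampled pupil sits only a small booklet drawn from a large question pool, yet aggregating over the sample yields an estimate for every question in the pool.

```python
import random

random.seed(0)

# Illustrative parameters (assumed, not taken from the APU surveys).
POOL_SIZE = 60      # total questions spanning the curriculum
BOOKLET_SIZE = 12   # questions any one pupil actually answers
SAMPLE_SIZE = 500   # pupils in the national sample

# Assumed "true" facility: chance a pupil answers each question correctly.
true_facility = [random.uniform(0.3, 0.9) for _ in range(POOL_SIZE)]

# Tally attempts and correct answers per question across the whole sample.
attempts = [0] * POOL_SIZE
correct = [0] * POOL_SIZE
for _ in range(SAMPLE_SIZE):
    booklet = random.sample(range(POOL_SIZE), BOOKLET_SIZE)
    for q in booklet:
        attempts[q] += 1
        if random.random() < true_facility[q]:
            correct[q] += 1

# National picture: estimated facility per question, summed over the sample.
estimated = [c / a for c, a in zip(correct, attempts) if a > 0]
coverage = sum(1 for a in attempts if a > 0)
print(f"Questions covered: {coverage}/{POOL_SIZE}")
print(f"Mean estimated facility: {sum(estimated) / len(estimated):.2f}")
```

Note that no individual pupil's result is reported: only the pooled, per-question estimates survive, which is precisely why such a survey cannot be used to rank individuals or schools.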

All measures of educational performance have technical problems; the APU and any other model of monitoring are confronted with a curriculum that won't stand still while the measuring tape is being applied. But as long as the problems are known they can be handled to reveal changes, if the instrument is appropriate and changes are actually taking place - as they did in mathematics between 1982 and 1987, and as they may be doing now, some say in one direction, and some in the other.

DEREK FOXMAN Education consultant 59 Minster Road London NW2
