Can we really trust the tests?

16th February 1996, 12:00am


https://www.tes.com/magazine/archive/can-we-really-trust-tests
There is much misunderstanding about the national curriculum tests. After the 1995 results, even The TES said in an editorial that “these results tell us little or nothing about . . . what children can and cannot do”.

The results actually tell us a good deal about this, as the School Curriculum and Assessment Authority’s reports on the 1995 tests showed. These used a nationally representative sample of pupils’ test work to highlight strengths and weaknesses. The key stage 2 report, for instance, pointed out that while many children could successfully calculate the arithmetic of a problem, they failed to address its wider context. The key stage 1 report pointed out that spelling errors were often the result of over-generalising from knowledge of spelling patterns. Such messages are intended to help schools to analyse children’s performance, and target improvements effectively.

The evaluation of the 1995 tests at key stage 2 highlighted areas needing further improvement. So, in 1996, a non-calculator paper has been introduced and children have been allowed an extra 10 minutes to complete each mathematics paper.

When such changes take place, great effort is made to keep test standards constant. Only when the curriculum changes, as in science at key stage 2 this year, will standards differ year on year. In 1996 the award of higher levels in the key stage 2 science tests will be tougher than last year. Similarly, the award of level 1 in English has become harder because of the revised curriculum. Everywhere else, the standards will be the same as in 1995.

The reason for this is simple: the tests are designed to reflect a national curriculum which sets explicit expectations of performance at each level. For teacher assessment, interpretations of the levels are given in the material SCAA has produced to exemplify standards through pupils’ work. For the tests, the award of each level depends on the mark obtained, and this is determined through statistical methods and expert judgment, much as in public examinations.

The close relationship between the tests and the curriculum means that they accurately reflect performance against the expectations set out in the national curriculum. Where children are expected to develop the ability to use mathematical operations in context to tackle real-life problems, the tests seek to assess this. In designing them, SCAA has sought to balance the need to tackle the basic skills rigorously with the need to encourage a rounded, high quality curriculum. The 1995 evaluations highlighted a number of areas, such as reading in key stage 1 and science in key stage 3, where the tests and tasks have had a beneficial effect both on standards of performance and on the quality of teaching.

SCAA’s recent review of assessment and testing drew several conclusions about future development. First, tests must be kept relatively stable. Minimising changes will ensure continuity of standards.

Second, they must continue to provide a valid assessment of the subject orders, so that they provide an accurate picture, in national curriculum terms, of what children can and cannot do. SCAA is developing further ways for schools to analyse test results, at pupil and school level, to get more detailed information about pupils’ attainment.

Third, more use can be made of the test results by schools. Some are already using them to analyse their strengths and weaknesses, in terms of teaching, curriculum planning and management. The evidence from the evaluations, however, is that secondary schools are finding it difficult to use the results of the key stage 2 assessments at transfer. We need to tackle this issue in particular, possibly by providing more detailed outcomes and support software, and accelerating the work we are doing on value added.

Since the introduction of national testing, there have been calls to re-introduce Assessment of Performance Unit (APU) surveys, in the belief that the tests cannot track national performance effectively. An independent benchmark could be useful in showing that test standards have not slipped, particularly if national performance improves over the years. Alone, however, surveys of the APU type could never replace the national tests. National testing can both provide an overall picture of national performance, and give each school a picture of its own comparative achievements. It is a powerful engine for improvement.

David Hawker is an assistant chief executive of SCAA responsible for testing
