Test system fails to make the grade

20th February 1998, 12:00am

In the wake of the major study by Paul Black and Dylan Wiliam on the value of diagnostic assessment, Caroline Gipps says that learning is short-changed by the present national tests.

There is much talk about improving learning, but what we are actually doing is spending huge sums on testing. National assessments take place four times during compulsory schooling and there may be more to come. Performance tables give this form of assessment a high profile, but this emphasis leads to the relative neglect of another central function of testing: assessment for learning.

It is crucially important that teachers use assessment to help students to learn, and there are two main reasons for this (quite apart from its being cheap, once the investment in training has been made).

First, learning style is shaped partly by how learning is assessed. Deep-learning approaches emphasise learners actively thinking for themselves and organising their knowledge; surface learning is more likely to involve accepting and reproducing content and ideas.

The strategic learner is the student who mixes the two approaches to best effect: for example, rote learning for vocabulary and spelling; deep learning for integrating ideas. The issue in assessment is the extent to which our tests and examinations encourage surface learning at the expense of deep learning.

While our aim may be to produce “active learners” who are reflective about the way they learn, in practice our teaching and testing may over-emphasise “surface learning”.

We need an approach to assessment which encourages the “strategic learner” to use a range of learning strategies combining “surface” and “deep” approaches. As timed tests are a poor vehicle for this, there should be more emphasis on tasks and teacher assessment.

In the national assessment programme the attempt to serve multiple assessment purposes has come to grief: as the main purpose becomes the monitoring and comparing of schools, so the emphasis is placed on external tests to ensure the reliability of these “high stakes” assessments. The teachers’ role in assessment is then diminished and the type of assessment task limited.

There is also a danger that teacher assessment simply mirrors the national tests, which will further restrict teaching and learning approaches.

Second, testing in itself does not guarantee improved learning: far more important to learning is effective teacher assessment, coupled with feedback to the pupil.

This has been clearly demonstrated by Inside the Black Box by Paul Black and Dylan Wiliam, the new study commissioned by the British Educational Research Association’s Assessment Policy Task Group (TES Research Focus, February 6). This means that more resources should be allocated to the professional development of teachers’ assessment skills.

Teacher feedback as a means of “closing the gap” between actual performance and desired performance is central to the learning process.

Testing nationally every pupil at 7, 11, 14 and 16 has led to an expensive, narrow and high-stakes assessment regime. National testing at key stage 3 is particularly wasteful and difficult to justify. The results have little formative impact, and the relative performance of schools at KS3 is easily overshadowed by their GCSE results. Any value-added approaches are likely to use performance at 11 as the baseline, not 14, and results at 16 as the “output”.

A more effective use of resources at key stages 1 and 3 would be to sample national performance using techniques similar to those once applied by the Assessment of Performance Unit. This would allow a much broader range of skills and knowledge to be assessed in a “low-stakes” setting and would provide information about what pupils have achieved. There is also a case for providing a battery of tasks which teachers can use as a basis for teacher assessment and which could be reported to parents.

The reduction in coursework in both GCSE and A-level examinations, and the increased number of external components in the revised GNVQ, are worrying trends. Tests at 7, 11 and 14, written examinations at 16 and 18, coupled with narrowing subject specialism from 16 to 18 years, are likely to lead to “test-taking” learning styles quite apart from a restricted curriculum diet, as what is easily tested becomes what is taught.

While Sir Ron Dearing’s review of qualifications for 16 to 19-year-olds sought to broaden this diet by encouraging the uptake of key skills and proposing a National Diploma across several “areas of experience”, the response from subject specialists and from course-planners has been lukewarm.

There is no argument about the importance of external “reliable” assessment at certain stages of schooling; the key issues are what types of task those tests and exams contain, and the emphasis placed on the role of teacher assessment and feedback throughout the majority of pupils’ time in school.

Professor Caroline Gipps is dean of research at the Institute of Education, London.
