On your marks

18th October 1996, 1:00am

When Roger Bannister ran the first sub-four-minute mile in 1954, it was seen as a very special achievement. Nowadays it is almost commonplace. Yet the achievement in some sense is exactly the same. There has been no change to the length of a minute or to the length of a mile. But training, diet, tracks and equipment have all improved. So what has happened to running standards? Have they risen or fallen?

Take a more complex example. The retail price index is a measure of inflation. The weightings of the commodities which make up the index are determined by typical consumer spending patterns. But as patterns change, so the index changes to reflect them. So can it still be measuring the same thing? And would it make sense if it were?
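To see the difficulty concretely, here is a minimal sketch in Python. The commodities, prices and weightings are invented for illustration, not real RPI data: it shows how exactly the same price movements yield a different index figure once the basket's weightings shift with spending patterns.

    # A minimal sketch of a weighted price index (invented commodities
    # and figures, not real RPI data). Weights stand for spending patterns.

    def price_index(prices_now, prices_base, weights):
        """Weighted average of price relatives, scaled so the base period = 100."""
        assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
        return 100 * sum(
            w * prices_now[item] / prices_base[item]
            for item, w in weights.items()
        )

    base_prices = {"bread": 1.00, "petrol": 2.00}
    new_prices = {"bread": 1.10, "petrol": 2.60}

    # Identical price movements, measured under two spending patterns:
    old_basket = {"bread": 0.7, "petrol": 0.3}  # bread dominates spending
    new_basket = {"bread": 0.3, "petrol": 0.7}  # petrol dominates spending

    print(round(price_index(new_prices, base_prices, old_basket), 1))  # 116.0
    print(round(price_index(new_prices, base_prices, new_basket), 1))  # 124.0

Neither figure is wrong; the two baskets simply measure different things, which is precisely the problem of comparison over time.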

Like economic indicators, educational indicators are heavily context-specific. The School Curriculum and Assessment Authority and the Office for Standards in Education are due to report this autumn on a study of exam standards over 20 years. But what can such a study tell us, given how much has changed in 20 years?

The period has seen profound demographic, social, political, cultural and technological change. In education in 1976, Shirley Williams was Secretary of State, Prime Minister James Callaghan made his Ruskin College speech about the failings of the education system, the Assessment of Performance Unit was set up, and debate about progressive versus traditional teaching methods was fuelled by Neville Bennett and the William Tyndale inquiry.

Since then the structure and management of education have changed out of all recognition, with the growth of sixth-form colleges and FE colleges, the spread of comprehensives and the introduction of grant-maintained status, local management and increased powers for governors. We’ve had about 20 education Acts and a host of reports - Waddell, Warnock, Cockcroft, Swann, Kingman, Elton, the Task Group on Assessment and Testing, Higginson and two Dearing reports, to name but a few.

Curriculum change has moved apace, with “new” maths, “new” history, a greater emphasis on practical and oral skills, new subjects such as business studies, media studies and computer studies, the move in Years 10 and 11 from physics, chemistry and biology to double science, and fundamental developments in technology. GCSE was not just a change to the 16-plus exam system, but a major source of curriculum development, while the national curriculum, the very idea of which would have been almost unthinkable in 1976, has been a stimulus for rapid change.

The exam system has developed accordingly. In 1988, O-level, designed for the top 20 per cent of the age group, and CSE, designed for the next 40 per cent, were replaced by GCSE, which now caters for more than 90 per cent of the cohort. Students in Year 10 have just started courses to prepare them for the third version of GCSE in 10 years. Syllabuses designed to meet the A-level common cores in the 1980s were revised for first examination in 1996, and already we are preparing for the new A-levels in the year 2000. The proportion of 18-year-olds taking A-levels has doubled from 15 to 30 per cent, while the number of students taking general national vocational qualifications is expanding rapidly.

Coursework has been introduced in most subjects at both GCSE and A-level, as have the assessment of practical and oral skills, modular A-levels and a recognition of the need to cater for a wider ability range. Multiple choice has virtually disappeared. Syllabuses have become more detailed and informative; exam boards publish exemplar material, mark schemes and detailed reports, and run in-service courses for teachers. Public examining is subject to codes of practice and is regulated.

Twenty years on we’re not assessing the same thing, we’re not assessing it by the same methods, and even if we were we would be doing so in quite a different context, which of itself would change the very thing we were assessing. Even the most sophisticated methods cannot demonstrate conclusively what has happened to standards over such a long period, as Dr Nick Tate acknowledged in The TES on September 6, even if we could define something as complex as standards. We are trying to compare moving targets against a moving background.

So, should we give up trying to evaluate what has happened to standards over time? Paradoxically, the answer must be “No”.

There are at least four reasons for continuing to explore this tantalising question. First, however imperfect our methodologies, they can monitor change, albeit crudely, and alert us to issues which merit further investigation. As long as we remember that any independent measure against which we compare exam results is telling us only part of the story, and is itself subject to the effects of the passage of time, we can usefully monitor trends.

Second, even crude methodologies enable us to detect relative shifts. For example, the percentage of candidates gaining grades A-C (since 1994 A*-C) at GCSE has been rising since the first exams in 1988. But it has risen faster for girls than for boys. This does not necessarily tell us much about the absolute standard achieved, but it does highlight a differential change worthy of further exploration.

Third, comparisons over time can give us valuable insights into what is happening to the curriculum and its assessment. The question we should be asking in education is not whether standards are rising or falling, but whether they are appropriate. Instead of the current claims and counter-claims about the standard of A-level maths, for example, we might get a more reasoned debate if we began by asking what A-level maths is for, who it’s for and what it would be appropriate to expect of candidates who hold such a qualification in the 1990s. Knowing about what happened in the past is a vital contribution to such a debate, though it cannot tell us if we are setting the right standards for today.

Fourth, it is not unreasonable to make comparisons over relatively short periods of time. The codes of practice which govern examining take this approach, requiring those who award grades to make judgmental and statistical comparisons with recent years. The A-level boards recently compared the grading of candidates’ work in A-level physics and maths over four and five-year periods. It is common for researchers to use similar time spans and for test developers to revise and re-standardise their tests every 10 years or so.

But 20 years? The SCAA and OFSTED report should make interesting reading, especially between the lines, but it’s no use expecting it to tell us conclusively whether standards have risen or fallen.

Dr Helen Patrick is a research officer at the University of Cambridge Local Examinations Syndicate, but writes here in a personal capacity. A longer version of this article was presented at the British Educational Research Association conference in September 1996.
