In my subject, English, a straight comparison of papers is of limited value: O-level English language was targeted at about 20 per cent of the population, whereas GCSE is aimed at a good 60 per cent.
For years I prepared candidates for exams with two one-hour composition papers and a one-and-a-half-hour comprehension/summarising paper, each marked to a tight mark scheme, with norm referencing for grades. Much of the marking, however, could be seen as negative, searching out what candidates didn't know or couldn't do.
The first major change was the abolition of the pass/fail concept, the magical A-C range acceptable to universities and professions, which had produced an often desperate crop of November retakes.
Instead, a range of achievement was recognised with A-E certificates. This also radically changed the numbers entered for O-level as opposed to CSE.
The next step was the merging of O-level and CSE into GCSE, again radically altering target groups and necessitating a major rethink of appropriate examination papers. The process still continues, with revised syllabuses for first examination in 1998 as national curriculum terminology, attitudes and practices continue to bite.
How, in the face of such constant change, can we really talk with confidence of valid comparisons?
O-level English language was designed to test reading and writing skills, just as GCSE continues to do. Both exams test comprehension and summarising. But the old-style precis is very different from today's GCSE summaries.
Counting of words and points has been replaced by performance criteria, a positive rewarding of what there is rather than a negative exclusion.
Similarly, GCSE papers examine a far wider range of skills in both reading and writing with a variety of literary and non-literary passages, or source material of a visual or statistical kind, requiring considerable versatility and flexibility from candidates. Much greater use is now made of pre-released material, making comparison with unseen passages extremely difficult.
Perhaps exam boards should spend far more time explaining to the public how and why they now assess in the way they do, and less on attempting to defend a further change in "pass" rates.
When asked, as an examiner, whether standards have fallen, I tend to fall back on a cautious subjective response. In my view, modern candidates generally write in a more lively and interesting way (possibly the influence of television or travel) but they are rarely as accurate, a view confirmed by a recent research study from the University of Cambridge Local Examinations Syndicate.
Rather than constantly agonising over results, why not spend our time applauding what is being achieved and attempting to raise standards in those areas of the subject that we know can be, and need to be, improved?