I SUPPOSE this question makes a change from the annual debate about whether A-level standards are falling, but the prior questions have to be: How could we know whether there are any soft options? What would count as evidence that there are?
Obviously, the question is prompted at this time of year by the publication of A-level results, so let's start there. Overall, 95.4 per cent of candidates achieved at least a grade E this year.
Taking a simplistic approach, we might argue that the subjects with higher pass rates were the easier options and those with lower pass rates were tougher. That would mean that classical subjects (including classical Latin and Greek), with a pass rate of 98.6 per cent in 2003, are easy options, while that well-known easy option, general studies, is, in fact, the hardest subject of all, with only a 90 per cent pass rate.
English A-level, with 98.4 per cent passing, looks "soft", while some of the usual "soft" suspects (psychology at 94 per cent) appear to be "hard".
Of course, this is nonsense. Latin and Greek have small numbers of candidates, most of whom are well taught in small groups in selective schools. Conversely, and ironically, it may be that general studies and psychology, because they are "known" to be "easy", attract candidates with less ability and/or those who think they don't have to put in the hard work that is needed.
So perhaps we could look at the proportion of A grades awarded in each subject. Of all candidates, 21.6 per cent achieved an A this year. Some 38.9 per cent of maths candidates got As, whereas two of the lowest A-grade rates were in media, film and TV studies (12.4 per cent) and sports/PE studies (11.6 per cent). Not much evidence of soft options there.
So how else might comparisons be made? We could look at the specifications in different subjects. But what valid criteria could be used to compare, say, Welsh (which 98.8 per cent of candidates pass) with, say, computing (pass rate 91.2 per cent), in terms of either the level or the amount of knowledge, understanding and skills needed for each subject?
Or we might make a study of the A-level performance descriptions, recently published by the Qualifications and Curriculum Authority, which aim to describe levels of attainment at the A/B and E/U boundaries for AS and A2 in 51 subjects. But, again, we would need reliable criteria.
Could it be that the mark schemes hold the secret, or the marked scripts of candidates who achieve each grade? Perhaps there is scope for research based on observation of the awarding meetings that are held for every subject at every awarding body.
At these meetings, examiners make decisions about which scripts should achieve the crucial A and E grades, driven primarily by their professional judgment of the quality of scripts and only secondarily by the statistical outcomes.
Even here, it would be difficult to devise criteria by which to compare the deliberations of, say, biology examiners (pass rate 92.6 per cent) with expressive arts examiners (98.6 per cent). There might be more mileage in comparing related subjects such as sociology and psychology, or chemistry and physics.
On the basis of the evidence we have, and taking into account the complexity of the issue, the answer to the question must be: No, it's much more complicated than that.
Patrick McNeill is an education writer and consultant and an experienced chair of examiners