The fact that the number of pupils passing Higher and Standard grade examinations increased yet again this year prompted predictable reactions.
Peter Peacock, the Education Minister, repudiated any suggestion that standards are falling and praised the efforts of teachers and the hard work of pupils. Anton Colella, chief executive of the Scottish Qualifications Authority, insisted that the quality assurance mechanisms in place, informed by the expertise of subject specialists and external assessors, ensured the consistency of awards year on year.
By contrast, Lord James Douglas-Hamilton, the Scottish Conservative spokesman on education, expressed concern that the value of qualifications may become debased as more and more pupils gain better grades. He called for independent research "to determine the reasons for improved pass rates and whether the results are comparable over time and reflect consistency of standards".
Questions about standards are not confined to the school sector.
Universities, too, have been under scrutiny as the number of candidates receiving first and upper second class honours degrees has steadily increased. Critics detect a trend of grade inflation, linked to published tables which claim to assess the quality of universities on a number of criteria, including the level of degree awards.
In this debate, political ideology and bureaucratic self-interest play as powerful a role as evidence and analysis. So what might be involved in any attempt at an objective assessment of what is happening? At a relatively simple level, it is possible to analyse the content of examinations, in relation to the syllabus, over a number of years. This is easier in some subject areas than others.
In linear, sequential subjects such as mathematics, where understanding of complex functions depends on prior understanding of more basic processes, comparisons of the difficulty of papers over time could be undertaken with some hope of reaching definite conclusions.
In other areas, however, it would be less straightforward. In history, for example, is it possible to say that studying one period is inherently easier or more difficult than studying another period? Again, do the linguistic demands of a Shakespeare play necessarily mean that it is harder than a drama by Arthur Miller? Or does much depend on the complexity of the themes raised by different works, regardless of the date when they were written?
Another strategy would be to look at the criteria given to markers. How have these changed over time? What weighting has been attached to simple recall and factual knowledge compared to understanding, analysis, synthesis, interpretation and personal response? Have penalties been applied when grammatical and syntactical errors have occurred? How much has performance depended on extended writing as distinct from short answers?
These are all matters that should be amenable to systematic investigation.
That is not to say that conclusions would be straightforward. Higher-order criteria are difficult to assess. Universities preparing for the 2008 Research Assessment Exercise have been informed that their research output will be judged on three main criteria: rigour, significance and originality. All three terms are subject to varied interpretation.
There are also issues to do with the balance between coursework, practical assessments and formal examinations. Is a more accurate judgment of a pupil's abilities likely when coursework is given significant weighting, or does this open up the possibility of cheating through the use of private tutors and internet downloads?
Before pronouncing on whether exams are easier or pupils are smarter, commentators would be well advised to address the hard questions that might lead to the uncovering of real evidence.
Walter Humes is professor of education at Aberdeen University.