into last week's A-level results illustrates a point that should be obvious to teachers: it is dangerous to take exam statistics at face value, and riskier still to read too much into the data they generate. Yet this is precisely what the press does every August.
The Government wants to use the figures to show whether students are better educated, or harder-working, than their predecessors. Reporters want the same information to judge the Government's overall record on education.
These statistics fail lamentably on both counts. They cannot offer any categorical evidence to answer these questions, because the GCSE and A-level system has changed so dramatically over the years that comparisons with previous cohorts are fraught with difficulty. For example, since 2001, students have received much better information on their performance midway through their A-levels.
AS grades give them an indication of which subjects they are likely to do well in, meaning they can drop their weakest subject halfway through the course. Students who drop out in this way are not counted in the final statistics. Nor, it has emerged, are those who sit their final exams but are not awarded a grade.
The modular structure of the modern exam also facilitates resits. This, government data have shown, is being used to improve final grades in a manner that was previously impossible.
At GCSE, recent changes, including the scrapping of the rule requiring students to study languages at key stage 4, might be expected to raise results.
Most fundamentally, the last decade or so has seen a transformation in the support exam boards offer to schools.
Attempting to compare results in this modern regime with those of, say, 15 or 20 years ago is as perilous as trying to draw conclusions from a scientific experiment in which more than one variable has been changed at once.
It is tempting to say that whether exam standards have been maintained is irrelevant; the main thing is that pupils' work is graded accurately and changes are not too dramatic from year to year.
But this leaves aside the fact that these results are supposed to play a crucial part in holding ministers and schools to account for any movement in performance over time. Ministers could take a lot of heat out of this debate by using another system for judging overall education standards, such as tests involving only a small sample of pupils every year.
With the GCSE and A-level system constantly changing, it will never be clear whether rising results are the product of better teaching or of exams becoming better at extracting grades from a given level of pupil ability.
For this reason, this annual debate is unlikely to go away. And students will continue to wonder why their hard work is the subject of so much scepticism.