Last month, the House of Commons Education Select Committee heard evidence from Duncan Baldwin, of the Association of School and College Leaders, that technology, and in particular online when-ready testing, could spell the end of the summer exam season and loosen the financial stranglehold of exam boards on schools.
This otherwise very welcome intervention needs to be seen in the context of the wider debate. We must question the relevance of formal examinations in a world in which the regurgitation of facts cuts less and less ice for future portfolio careers and lifelong learning. Employers are already devising new ways of filtering applicants, using online tests of aptitude and intelligence.
The prognosis involves less testing spread over a longer period, which, on the face of it, would be welcome from an educational standpoint. But it raises a more fundamental question.
It is assumed that assessment using AI would streamline the process, but if, in fact, it leads to greater dependence on banks of pre-tested items based on multiple-choice, true/false or fill-in-the-missing-item answers, then we have to ask: what will be lost?
Edtech: the risk of short-answer testing
Such testing might be helpful in selecting among job applicants, but it does not offer a satisfactory summation of a course of general education.
Truncated testing fits with the trend by which bite-sized knowledge is privileged over deeper, connected understanding. This trend has been hastened by the internet and social media, although concerns about a decline in attention spans, and the increasing unpopularity of reading whole books, go back a very long way.
Exams have tended to reflect, and in turn reinforce, these trends. For decades, exam boards have been working on making tests as easy as possible to mark clerically, if not mechanically. This has meant the erosion of professional judgement and of the ability to distinguish between answers in terms of levels of sophistication.
To replace GCSE with assessments that can be marked more easily would be a step sideways. It is a mistake to assume that testing in whole areas of the curriculum outside English literature can be reduced to short-answer responses. All subjects should encourage full mastery and joined-up understanding. Yet for the foreseeable future, and even with adaptive assessments, mechanical testing means mechanical teaching.
It is sometimes argued that the writing of essays, still expected of many exam candidates at least at A level, is increasingly out of step with what is required even in "traditional" careers. Writing extended answers longhand requires the archaism of penmanship, which risks consigning traditional exams to ridicule; but the construction of extended arguments is in itself a skill that retains currency.
Mercifully, our exam system doesn’t only test factual memorisation, or simple understanding of discrete concepts. As long as exams include room for longer responses, for professional judgement in marking, and for the allocation of levels of response, there is hope. There’s plenty wrong with the present system, but most of that results not from the way exams are designed, but from the way they have been conscripted by politicians in search of ready-reckoner accountability measures.
Replacing our current exams with more mechanistic models risks solving one problem by creating another. Depressingly, that very procedure seems to be a historical hallmark of English exam reform.
Kevin Stannard is director of innovation and learning at the Girls' Day School Trust. He tweets @KevinStannard1