For multiple-choice questions and short-answer tests, the case for online marking is easily made: it is cheap and quick, and it deals more consistently and accurately with basic errors than a tired teacher trawling through papers late at night at their kitchen table.
But are essays and long answers a different matter? Research by the University of Cambridge Local Examinations Syndicate suggested that there were no disparities between the results of scripts marked on paper and those marked on screen. Examiners did, however, say they found it easier to get the overall sense of a long text by reading it on paper rather than on screen. Practice and familiarity might remedy that.
The next step - the use of computers to replace human examiners - raises other questions. While a computer may score 100 per cent for spotting missing apostrophes, critics wonder whether it will cope with less conventional but possibly effective forms of expression. How would it rate a budding James Joyce, a brilliant idea or an original argument?
That may not matter. Exams have never been about creativity or original thought. Teachers complain that the mark schemes for existing exams are drawn so tightly that examiners are simply ticking off points that a candidate has to make. Teaching to the computer may be much the same as teaching to the test.
In some respects, computers may outclass human examiners. They are free from the bias and inconsistencies that affect people. But it will take time to persuade us that the impersonal impartiality of a computer is a better way of deciding between an A and a B grade than a teacher, who may have prejudices but also has years of experience of their subject and their students.