Could you grade 16,000 essays in just 20 seconds?

OCR board may task computers with marking extended answers
15th May 2015, 1:00am




One of the UK’s largest exam boards is considering the introduction of automated marking for essay questions, TES can reveal.

Although the use of computers to assess multiple-choice papers is already widespread, OCR has been researching the option of using computer programs to mark longer answers. But the board has admitted that the technology is not yet reliable enough to use in high-stakes tests such as GCSEs and A-levels.

The news comes as the exam board administering national tests in Australia prepares to introduce computer marking for essay-based exams, despite opposition from teaching unions.

The Australian Curriculum, Assessment and Reporting Authority (Acara) said the move would substantially reduce the amount of time it took to grade papers, meaning that students could address areas for improvement immediately.

One option being considered involves teachers grading a sample of papers, which are then fed into a computer program so it can “learn” the marking criteria.
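The "learn from a marked sample" approach described above is, at its simplest, supervised learning: fit a model to teacher-awarded marks, then apply it to unseen scripts. The sketch below is purely illustrative, not Acara's or OCR's actual system; the single surface feature (word count) and the sample data are invented, and real systems use far richer linguistic features.

```python
# Toy illustration: teachers mark a sample of essays, a model "learns"
# from that sample, then predicts marks for new work.
# Feature and data are hypothetical, for illustration only.

def word_count(essay: str) -> int:
    return len(essay.split())

def fit(marked_sample):
    """Least-squares line: predicted mark = a + b * word_count."""
    xs = [word_count(essay) for essay, _ in marked_sample]
    ys = [mark for _, mark in marked_sample]
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    a = mean_y - b * mean_x
    return a, b

def predict(model, essay: str) -> float:
    a, b = model
    return a + b * word_count(essay)

# Hypothetical teacher-marked sample: (essay text, mark out of 10).
sample = [
    ("short answer", 3),
    ("a somewhat longer answer here", 6),
    ("a much longer and more developed answer with extra detail", 9),
]
model = fit(sample)
```

Note that even this toy model rewards sheer length, which is exactly the kind of criticism levelled at automated essay marking later in the article.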

Stanley Rabinowitz, a general manager at Acara, said tests showed that computers marked as well as or even better than teachers. “Automarking is central to being able to return student results within much shorter time periods,” he said.

‘Understandable caution’

Cambridge Assessment, which oversees OCR, has been carrying out its own research into computerised marking for essay-based exams. “This is something we have been looking at for a while, although it doesn’t mean it will happen in high-stakes examinations tomorrow,” a spokesman said.

He explained that the technology had advanced since one of OCR’s predecessor exam boards commissioned a study into the issue several years ago. Researchers looked at short free-text answers in a biology exam and found that computer marks matched those awarded by teachers 88 per cent of the time.

“The technology is much more capable now but it is not good enough yet for marking high-stakes essays,” the spokesman said. “There is also understandable caution from students and their families about using automated marking.”

Pearson, which owns the Edexcel exam board, uses computerised marking in some tests outside the UK, but a spokeswoman said there were no plans to introduce automatic marking for its A-level or GCSE qualifications. AQA also said that it had no plans to explore computerised marking for essay-based tests.

Meanwhile, school leaders said they were sceptical that computers could take the place of human markers in subjects such as English.

“I’m intrigued to see if it is successful but I’m struggling to understand how it is going to work,” said Robert Campbell, headteacher of Impington Village College in Cambridgeshire. “English is notoriously complex and I can’t see how a computer will pick up the elements that make for a great piece of writing.”

Brian Lightman, general secretary of the Association of School and College Leaders, said computers could play a role in marking tests where there was a right or wrong answer, but he was more cautious where assessment required any extended or explanatory writing.

“Marking that type of work requires a subjective judgement,” he said. “Computers can do so much and might be useful and reliable in lots of ways, but I would really need to be convinced that they are a replacement for a trained professional.”

Mr Lightman said formative tests were also an important tool in assessing pupil progress and that moving to computerised marking would diminish their value.

Automated marking of essays has been trialled in the US, where it has been found to have similar levels of accuracy to human marking. According to the Educational Testing Service, which administers more than 50 million tests a year, its e-Rater system can grade 16,000 essays in 20 seconds.

But there are concerns that automated marking systems are unable to recognise when “facts” are incorrect, and that they reward long essays and the use of long words while penalising short sentences and short paragraphs.

Can computers do complexity?

Robert Campbell, headteacher of Impington Village College in Cambridgeshire, says that computers could struggle to understand metaphors, idiosyncratic use of language and the constructions used by particularly talented writers.

“Think of the opening of Bleak House,” Mr Campbell says. “‘London. Michaelmas Term lately over, and the Lord Chancellor sitting in Lincoln’s Inn Hall. Implacable November weather.’

“I suspect those sentences would run foul of a computer, yet this is one of the great novels of the English language,” says Mr Campbell, who also teaches English and is an examiner.

“I can see how it could work in detecting the use of commas and full stops and how language is constructed and degrees of accuracy, but at the top end will it penalise flair and creativity?”
