United States. Cost-cutting software is checking students' papers for vocabulary, syntax and logical thought in key tests, reports Jon Marcus
Students who take the four-hour, high-pressure admission test for American graduate schools are having their papers marked by machines.
Since February, tests taken by 200,000 applicants to business schools - which include answers in essay form - have been graded by a computer program called E-rater, a system that drastically cuts costs and is expected to be expanded to other types of educational tests at all levels.
Using technology that took three decades to develop, E-rater checks text for vocabulary and syntax, ostensibly measuring logical thought.
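As an illustration of the kind of surface signals such a system might examine - a hypothetical sketch, not a description of E-rater's actual features - an automated grader can compute crude proxies for vocabulary range and syntactic complexity:

```python
import re

def surface_features(essay):
    """Crude proxies for the kinds of signals an automated grader
    might use (hypothetical, not E-rater's real feature set):
    vocabulary richness and average sentence length."""
    sentences = [s for s in re.split(r"[.!?]+", essay) if s.strip()]
    words = re.findall(r"[a-zA-Z']+", essay.lower())
    return {
        "num_sentences": len(sentences),
        "num_words": len(words),
        # type-token ratio: share of distinct words, a rough vocabulary measure
        "type_token_ratio": len(set(words)) / len(words) if words else 0.0,
        "avg_sentence_length": len(words) / len(sentences) if sentences else 0.0,
    }
```

A real system would combine many such features; the point is only that each is mechanical to compute, which is what makes large-scale automated grading cheap.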
That works on an admission test for business school because "we're looking for the way you can organise your ideas and express them through your writing - we're not grading language skills," said Frederic McHale, a spokesman for the Graduate Management Admission Council.
Other such assessment software is coming into operation, hastened by the availability of so-called computational linguistics technology developed during the evolution of speech-recognition software. The Project Essay Grader, for example, can be "taught" to recognise a good or bad essay from manually graded examples of each.
The more advanced Intelligent Essay Assessor uses a sophisticated form of artificial intelligence to compare an essay answer with reference material. That program is already used in primary and secondary schools. Such technology allows standardised tests to include more essay questions, rather than multiple-choice ones, Mr McHale said.
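The idea of comparing an essay with reference material can be sketched in miniature. The following is a simplified stand-in for what such a system does (the Intelligent Essay Assessor uses far more sophisticated semantic analysis): represent texts as word-count vectors and score an essay by its similarity to the reference text.

```python
from collections import Counter
import math

def word_vector(text):
    """Bag-of-words count vector (lower-cased, punctuation stripped)."""
    words = [w.strip(".,;:!?\"'()") for w in text.lower().split()]
    return Counter(w for w in words if w)

def cosine_similarity(a, b):
    """Cosine of the angle between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def grade(essay, reference, bands=(0.2, 0.4, 0.6)):
    """Map similarity to the reference material onto a 1-4 band.
    The thresholds are illustrative, not taken from any real grader."""
    s = cosine_similarity(word_vector(essay), word_vector(reference))
    return 1 + sum(s > b for b in bands)
```

An on-topic essay shares vocabulary with the reference and scores in a higher band than an off-topic one - which also hints at the critics' complaint below: an original essay using unexpected vocabulary would share fewer words and be marked down.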
The Educational Testing Service (ETS), which developed and owns the E-rater technology, also oversees such other high-stakes examinations as the Scholastic Assessment Test (SAT), Graduate Record Exam (GRE), Test of English as a Foreign Language and Advanced Placement test.
In all, ETS gives nine million tests each year, many of them candidates for electronic grading.

Critics say that using computers to read essays will discourage the proper use of language. Unique writing styles and uncommon words confuse the mechanical graders.
"The big problem with automated grading is that any sign of originality - for example, the use of metaphor or the introduction of unusual explanatory examples - will be downgraded," said Andrew Feenberg, a professor of philosophy at San Diego State University who studies the issue. "This only works for very stereotyped questions and answers."

Within a decade, ETS said, the SAT, given to four million high-school students last year, will go on computer, making electronic scoring of essays possible.
The ETS also announced last week that all GRE tests, which are multiple-choice, will in future be taken on computer and adapted to the level of the candidate. After asking some initial moderately difficult questions, the computer will assess the ability of the test-taker and pose questions closer to that level. The marks will be weighted to give higher scores to more difficult questions.
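The adaptive procedure described above can be sketched as a simple loop - a toy model only, since real computer-adaptive tests such as the GRE rely on statistical item-response models rather than this step-up/step-down rule:

```python
def adaptive_score(answer_fn, num_questions=10, levels=10):
    """Toy computer-adaptive test. Start at moderate difficulty,
    step the difficulty up after a correct answer and down after a
    wrong one, and credit harder questions with more marks.
    `answer_fn(difficulty)` returns True if the candidate answers
    an item of that difficulty correctly."""
    difficulty = levels // 2          # begin with moderately difficult items
    score = 0
    for _ in range(num_questions):
        if answer_fn(difficulty):
            score += difficulty       # harder items are worth more marks
            difficulty = min(levels, difficulty + 1)
        else:
            difficulty = max(1, difficulty - 1)
    return score
```

A candidate who keeps answering correctly is steered toward harder, higher-weighted items, so the test converges on the candidate's level in far fewer questions than a fixed paper would need.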