Badly-behaved pupils are often given worse marks than they deserve, an influential report concluded today.
The finding is important as it coincides with official moves to increase teacher assessment in schools. The analysis of 20 years of research said there was strong evidence that unruly boys in particular lost out when given grades.
Teachers could also be biased by other factors, said Professor Wynne Harlen who wrote the report for the Assessment and Learning Research Synthesis Group. There was some evidence that, when asked to grade pupils on individual tasks, teachers' knowledge of the youngsters' overall abilities influenced their judgment.
And children with special educational needs sometimes fared worse when assessed by their teacher than they did in formal tests, said Professor Harlen.
A government task force is recommending a big expansion in teacher assessment of 14 to 19-year-olds over the next 10 years.
However, Professor Harlen's message is not entirely gloomy. Provided teachers received proper training, and criteria for judging pupils were clearly defined, problems of potential bias were not insurmountable, she said.
Her analysis was based on an evaluation of 30 of the most highly-regarded studies on teacher assessment worldwide since 1985.
Academics in Cleveland, Ohio, and New York had in 1993 compared teachers' gradings of 794 youngsters in kindergarten, grade one and grade two with their assessment of the children's behaviour.
Behaviour consistently coloured teachers' judgments of pupils' academic ability. Because boys tended to misbehave more, they were more likely to lose out, said the American study. This was, they said, the result of subconscious judgments by teachers, not intentional decisions to penalise troublesome youngsters.
Another study, of English 11-year-olds in 1996-98, found that teachers' judgments of special needs pupils were often well below their test results.
Six more studies found evidence of teachers being biased by students' gender and by their knowledge of the youngsters' overall abilities.
Teachers' assessments were more vulnerable to being skewed by such "non-relevant factors" than test results, said Professor Harlen.
The review was commissioned after an earlier study by the same group found evidence that testing demotivated learners. It offered no firm conclusions as to whether teacher assessment was a more reliable measure of pupils' achievement than testing, or whether teachers' judgments tended to be higher or lower than test grades.
Teacher assessments cannot simply be compared with test results to determine their accuracy, as the tests themselves might not be reliable guides to performance, said the report.
Professor Harlen said that teacher assessment could be made reliable where clear, detailed criteria for judging pupils were agreed with the teachers who would apply them.
School assessment procedures and teacher training could also be used to protect against potential biases in teacher assessment.
Policy-makers should also acknowledge the shortcomings of exams and national tests when deciding whether to back teacher assessment.
Mike Tomlinson's review of 14-19 education is proposing that teacher assessment forms a major part of a new diploma qualification recommended to replace A-levels, GCSEs and vocational exams in English schools by 2014.
In a report last month, the Tomlinson review said that research evidence suggested that teacher assessment worked well when it was supported by detailed marking criteria, and guidance as to how these rules were applied.
Professor Harlen's analysis supports this view. It also argues that training for assessment is most effective when teachers are involved in the setting of criteria.
The report is published by the Evidence for Policy and Practice Information and Co-ordination Centre, http://www.eppi.ioe.ac.uk