Measures of achievement need to be backed up with some gauge of pupils' raw ability. Phil Revell reports on how to provide a rounded picture
You do not need a crystal ball to foresee that Office for Standards in Education inspections will increasingly examine a school's capacity to inspect itself.
Schools will be expected to demonstrate pupils' real progress, with the spotlight on value-added measures of attainment rather than raw scores. This will require reliable benchmarking data, yet key stage tests will never give the objective data that schools need.
"Key stage tests are achievement tests," says Carol Fitz-Gibbon, professor of education and director of the curriculum, education and management centre at Durham University.
"We know the teacher has an effect. Years ago it became apparent that we needed an aptitude test. You need tests for both prior achievement and aptitude, because the answers can be different."
Professor Fitz-Gibbon argues that achievement tests focus on content: how much is remembered and how much is understood. Aptitude tests, if well designed, ought to inform teachers about potential.
Professor Fitz-Gibbon began working in this field in 1983 after a request from a local school: "The maths department were getting a lot of Ds at A-level and wanted to know why. I began to collect data."
The research spread out, first to 12 local schools, then to schools across the Durham and Tyneside LEAs. The result was ALIS, an A-level information system that sixth forms could use as a predictor of A-level potential. ALIS has been followed by YELLIS, the Year 11 information system, by MidYIS for key stage 3 and PIPS for primary schools.
"We're looking to find out how difficult that group of children is going to be to teach to the next hurdle," says Professor Fitz-Gibbon. "The aim of the testing is to provide a baseline."
Aptitude tests like YELLIS and MidYIS are similar to intelligence tests. They attempt to reveal potential by looking at students' responses to questions examining verbal, non-verbal and quantitative reasoning.
More than a thousand schools use the tests nationally. Charlton school in Shropshire, which has used the tests for six years, is a far from typical example.
"Year 7 and Year 10 do the tests in the first few weeks of the term," says deputy head Gwen Kelsey. "Results come back in October and staff are pestering for them; they want them to inform the work they are doing."
The school uses the tests to aid classroom setting. "They aren't the total factor," says Ms Kelsey. "Departments still have their own professional judgment about where to put children, but as a senior management team we look at the bands and we would question an A-band student being in a low set. We would ask the department to justify that."
Controversially, the school also uses the test scores with individual pupils, something the Durham researchers have reservations about. "You can sit down with a child and show them where they are," says head Kay Cheshire. "And it's not us flannelling them. We can say, 'there's no escape here. You're intelligent, this piece of paper says you can do it.' That has done a whole lot for the self-esteem of a number of children, especially written-off bad boys."
The school's long history of gathering baseline data has paid off in other ways. "Ofsted identified one area of the curriculum as underperforming against national targets," recalls Gwen Kelsey. But there was a school policy of directing low-achieving pupils towards that subject area.
"Those children were all in a particular YELLIS band," she says. The department's grades did in fact represent considerable value-added, and eventually the Ofsted team accepted that.
In Chichester, the St Philip Howard catholic high school is also a data-rich environment. It uses the Cognitive Ability Test developed by publishers NFER-Nelson. As in Shropshire, teachers share the CAT scores with older students, something that others have shied away from.
Deputy head John McGuinness says: "In many schools it's not until the mock GCSEs that students receive objective information about standards. They say: 'This is OK for the brighter ones, but aren't you undermining kids with lower scores?'" The school handles the issue carefully. Test scores are shared at the beginning of Year 10; there are support materials for parents and an information evening. One side-effect has been that parents' evenings are more focused. Parents now have enough information to initiate genuine discussion.
Both schools argue that aptitude tests give them information that national curriculum tests simply don't deliver.
"You can't compare year groups with key stage tests because the tests differ each year," says Mr McGuinness. "How many parents understand the levels and the expected progress between levels? CATs allow us to set realistic targets that parents can relate to."
In Shropshire the verdict is even clearer. "National tests are just about looking at national standards, without taking into account children's potential," says Gwen Kelsey. "If they weren't a legal requirement I don't think we would be paying a great deal of attention to national tests at all."