Academics say the best self-evaluation is for the school's eyes only. Anat Arkin reports
Long before the Office for Standards in Education, league tables and the naming and shaming of "failing" schools were twinkles in the ministerial eye, researchers at Durham University were helping schools monitor their own effectiveness.
The university's curriculum, evaluation and management centre (CEM) has provided schools with value-added information for more than 20 years. Peter Tymms, the centre's director, stresses that this information is primarily for internal consumption to help schools identify their problems in a climate free from fear.
"If you've got information that's for other people, for accountability purposes, then your mindset is to hide problems. But if you are part of a professional monitoring system, the motivation is there to find the problems and sort them out," he says.
Professor Tymms gives a guarded welcome to Ofsted's new emphasis on self-evaluation. One of the roles of inspections should be to check that schools have their own quality-assurance processes in place and are using them effectively, he says. But if inspectors were to use the value-added data that came out of those processes to hold schools to account, that would be more problematic. "With a harsh accountability system, with naming and shaming and league tables, the data are always suspect. So a heavy involvement by Ofsted in high-quality self-evaluation could undermine it," says Professor Tymms.
What came to be known as the CEM centre began life in 1982, when a governor of a successful school in Newcastle asked assessment expert Carol Fitz-Gibbon what she made of some disappointing A-level maths grades.
Following that discussion Professor Fitz-Gibbon set up a project that looked at students' progress and attitudes to the way they had been taught, and fed that information back to their schools.
Originally involving just 12 schools, this project evolved into the "advanced level information system" (Alis), which now covers a third of all A-level entries. It also provided the model for the CEM centre's value-added monitoring systems for other age groups: Year 11 (Yellis), middle years (MidYIS), and performance indicators in primary schools (Pips).
Central to all these systems is the idea that school leaders and teachers should play an active part in analysing and interpreting the value-added feedback. So if, for example, a child is having trouble with reading, the data will pick that up but CEM centre staff will not attempt to explain the reasons or tell teachers what to do about it. "There are lots of reasons for these things and we think that people on the ground are best placed to understand what's going on," says Peter Tymms.
While not telling schools what to do, the CEM centre does provide guidance on how its data can be used. David Elsom, in-service training co-ordinator, says value-added information enables school managers to make fairer judgements about the performance of individual teachers or departments. It can help them appreciate, for example, that not all subjects attract students of similar ability, especially at A-level. It can also show school managers that the combination of subjects that students take sometimes influences the apparent success of particular departments.
But Mr Elsom points out that feedback from the centre's monitoring systems is for everyone in a school - not just its managers. "What is important is that it is not described as a useful management tool because that is the kiss of death as far as teachers are concerned," he says.
It often takes time to get staff to look at their own teaching in the light of the data. Penglais secondary school in Aberystwyth started working with the CEM centre four years ago, and assistant head Peter Hendry still needs to remind staff that the data are not intended to "beat people over the heads with" but to help them look again at what they are doing and if necessary tweak it or change direction.
The school gives data to individual departments and lets them make their own judgements, though Mr Hendry, who is responsible for school self-evaluation, will provide help when asked. "Once they have understood the data and taken ownership of it, they can be quite critical about what's going on and come up with very interesting suggestions for improvement," he says.
At Durham Johnston comprehensive, which has been working with the CEM centre since the late 1980s, the finer points of Alis, Yellis and MidYIS now form part of the common currency of staff discussion, according to head Richard Bloodworth. Like Penglais, the school gives subject-specific feedback to individual departments. It also looks at data on a whole-school basis to compare groups and consider their potential for success. "That's very helpful with target-setting, which can feel like a shot in the dark," says Mr Bloodworth.
With the MidYIS system, schools receive both an overall test score for each pupil and a breakdown of their mathematical, verbal and non-verbal reasoning skills. This can throw up big discrepancies.
"You'll see a youngster who is quite good numerically and quite good at non-verbal reasoning. But their verbal skills will be way, way down and that suddenly explains a whole lot of patterns of behaviour and can be very valuable," Mr Bloodworth says.
At Intake primary school in Doncaster, which helped pilot the Pips project, staff find data on children's potential especially helpful. Head Liz Paver points out that by identifying not only slow learners but also potential high-fliers, the system enables staff to adjust the curriculum to meet the needs of both groups.
Mrs Paver believes that while there will always be a need for external assessment and inspection, many schools are now in a good position to evaluate themselves. Arguing that much of the money currently spent on Ofsted inspections should go into monitoring schools' own self-evaluation and funding follow-up development work, she says: "I think schools would really move on if that happened."