Martin Titchmarsh advises on fairer scoring of exam performance.
It is now commonly accepted that schools can, and do, make a difference to the progress of their pupils. It is therefore essential for a governing body to safeguard the interests of pupils by monitoring the school's examination performance.
Governors may be tempted to react to their school's position in the examination league table like directors of a football club in the relegation zone. And many headteachers anxiously await their vote of confidence from the board. The best advice to governors is to proceed with caution.
First, what not to do. It is tempting to compare one year's results with the previous year's, and to make a quick judgment: up three points, or down six. Year-on-year comparisons can be very misleading. Year groups differ in their general quality - all teachers know the phenomenon of a "good year" or a "bad year". Also, variations in the balance of boys and girls within a given year group can make a great difference to overall results.
Comparisons with national averages, or a neighbouring school, can appear attractive but do not take into account different starting positions. If governors insist on comparing their school's results with national statistics they should at least do this in a more sophisticated way than by comparing it with the national average.
It is still too early to have detailed exam statistics for 1996, but in 1995 the average A-level points score (counting A=10, B=8, and so on) for all schools was 17.5. In maintained schools it was lower, at about 14 points. Boys and girls in these schools achieved a similar average, but higher proportions of boys scored either very badly or very well. At GCSE the situation was more complex. In all schools 43.5 per cent achieved five or more A, B or C grades; in maintained schools the figure was 41.1 per cent.
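The points scale can be sketched as a small calculation. The values for grades below B (C=6, D=4, E=2) are inferred here from the "and so on" pattern of the old scale, and the cohort data is purely illustrative.

```python
# Old-style A-level points scale: A=10, B=8, and so on down in steps of two.
# Grades C=6, D=4, E=2 are inferred from that pattern, not quoted from the article.
POINTS = {"A": 10, "B": 8, "C": 6, "D": 4, "E": 2}

def candidate_score(grades):
    """Total points for one candidate, e.g. B + C + C = 8 + 6 + 6 = 20."""
    return sum(POINTS[g] for g in grades)

def school_average(candidates):
    """Mean points score across all candidates in the cohort."""
    return sum(candidate_score(c) for c in candidates) / len(candidates)

# Illustrative cohort of three candidates with three A-levels each.
cohort = [["A", "B", "B"], ["B", "C", "C"], ["C", "D", "E"]]
print(school_average(cohort))  # (26 + 20 + 12) / 3
```

On this scale, the 1995 all-schools average of 17.5 points corresponds to a typical candidate achieving roughly two C grades and a D.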
These statistics hide further big differences between the results of different types of maintained schools and between boys' and girls' performance. In secondary modern schools 25.9 per cent of pupils achieved the five or more ABC benchmark and in comprehensives it was achieved by 39.7 per cent. In boys' comprehensives, however, only 35.9 per cent achieved these results, compared with 46.1 per cent in all-girls' comprehensives.
It is also tempting for governors to make quick comparisons between subjects, but these too can be misleading. Some subjects are more "difficult" to pass than others, and the performance of boys and girls varies from subject to subject. In maintained schools 52.4 per cent of all pupils passed English language at grades A, B or C, but only 40.6 per cent achieved the same in mathematics. Boys and girls performed equally well in mathematics, but while half of all girls achieved ABC grades in French, only a third of boys did so. Four out of 10 boys passed English language at an ABC grade, compared with six out of 10 girls.
It can be seen, then, that simple comparisons don't get us very far. Governors need to be aware that a large number of factors affect pupils' examination performance. Schools can influence some of these, such as attendance, but many others are outside, or at the margins of, direct control. Age within a year group, a change of secondary school, background or family factors can all affect pupils' performance. Nevertheless, research now shows that pupils' prior attainment is the most significant factor in predicting their subsequent exam performance.
If governors wish to carry out a meaningful analysis they must take account of the school's intake, and then judge what their school has added to their pupils' knowledge, skills and understanding. This is the value-added approach.
Governors can ensure that their school carries out systematic value-added performance analysis by subscribing to one of the national systems. There are a number of user-friendly, and sophisticated, statistical packages available. The pioneering work of Professor Carol Fitz-Gibbon and the ALIS (A-Level Information System) team enables a school to analyse its A-level performance. The Department for Education and Employment has also produced easy-to-use graphs enabling A-level success to be analysed in the light of a student's GCSE performance.
Fitz-Gibbon's team has produced the YELLIS method to analyse GCSE results. Pupils take tests in either Year 10 or Year 11. A comprehensive package of information is then produced analysing the school's general examination performance and results in individual subjects. The National Foundation for Educational Research, too, has produced its QUASE system to help schools measure value added. By this method, attainment at intake, using for example an NFER test score, can be used to predict GCSE performance at 16.
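The value-added logic described here - predict each pupil's GCSE outcome from an intake measure, then compare actual with predicted - can be sketched as follows. The pupil data and the use of a simple straight-line fit are illustrative assumptions, not the actual ALIS, YELLIS or QUASE methodology.

```python
def fit_line(xs, ys):
    """Least-squares straight line y = a + b*x through the points (xs, ys)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    a = mean_y - b * mean_x
    return a, b

# Illustrative data: intake test scores and later average GCSE point scores.
intake = [95, 100, 105, 110, 115]
gcse = [4.0, 4.6, 5.0, 5.8, 6.1]

a, b = fit_line(intake, gcse)

# "Value added" for each pupil: actual result minus the result predicted
# from intake attainment. A positive residual means better than expected.
for x, y in zip(intake, gcse):
    predicted = a + b * x
    print(f"intake {x}: actual {y:.1f}, predicted {predicted:.2f}, "
          f"value added {y - predicted:+.2f}")
```

The same residual idea scales up from individual pupils to subject departments and whole schools: a school whose pupils consistently outperform their predicted grades is adding value, whatever its raw league-table position.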
The value-added approach is fairer, but even then only highly generalised conclusions can be drawn. The data is not definitive but provides a useful prompt to further detailed discussion. Governors can encourage the school to use exam analysis in order to:
* evaluate the preceding year's examination performance;
* set realistic examination targets for subject departments and for the school as a whole;
* identify pupils who may be underachieving and set them realistic targets;
* assess the effectiveness of provision for specific groups of pupils;
* provide pupils with information to help them choose appropriate post-16 courses.
Governors and headteachers should use value-added analysis to provide management information. They would be wise to avoid using statistics in the way the drunk uses a lamp post: for support, not for illumination.
Martin Titchmarsh is headteacher of the Nobel School, Stevenage. Details of ALIS and YELLIS from 0191 222 6588 and QUASE from NFER on 01753 574123.