Let's not confuse better results with the raising of attainment
A strong theme in last autumn's Scottish Learning Festival was "improvement". Around that time, Michael Russell announced that he had appointed a group, including headteachers who had "walked the talk", to report to him on how to improve the attainment of Scottish pupils. They finished their work at Christmas, but there has been no public report as yet; I really hope they have widened their remit to look beyond "attainment".
In recent years, within the education world in Scotland, attainment has been code for "exam results". Many would argue that a focus on exams has led us to value what we can measure, rather than measure what we value. Which examinations allow us to measure improvements across all four capacities in the curriculum?
Ben Levin, in his keynote address to the Learning Festival, based his convincing account of significant improvements in Ontario on the measurement of "student outcomes", which combined elements of attainment and wider achievement. Two key measures of progress which he quoted were literacy and numeracy grades at age 12 (can they read and write?) and rates of graduation from high school. Graduation in Ontario values a wide range of qualities and skills, as it requires a number of credits earned through a flexible mix of conventional classroom learning, experiential and work-based learning and practical performance.
In Scotland we have used three broad indicators with a focus on academic performance: the Scottish Survey of Achievement (SSA); exam results; and the international comparisons of Pisa. But do these give enough valid, reliable information about the improvements we want?
The SSA, originally based around the 5-14 level statements, gives figures said to be reliable to within 2 per cent. Although 5-14 results were often used to judge standards in upper primary and lower secondary, they were largely unreliable (moderation was weak) and invalid, since the levels were originally based on best guesses of what it would be reasonable to expect children at particular stages to be able to do, not on reliable evidence of what Scottish pupils actually did. The SSA allows more reliable comparisons to be made over the years, across cohorts. The latest figures in reading (2009) and numeracy (2008) suggest no recent significant change in attainment levels at any stage from P3 to S2.
Exam results at S4 (typically 5+ at Level 3, 4 or 5 at school or national level) have been widely used as a proxy measure of progress in secondary education, but do these and similar figures tell us if children are learning better?
Some schools and authorities, under pressure to improve performance, developed a range of strategies to deliver more favourable figures - strategies such as curricular manipulation to maximise access to "easier" subjects, intensive individual support for those on 4+ after their prelims or even, so I am told, tricks such as promoting absentee students into an S5 curriculum to avoid them appearing in the S4 September census figures.
Across Scotland, teachers have become adept at "teaching to the test" - making sure children understand the top tips for a Credit award. Such actions may improve results, but they do not necessarily improve learning. They result from the "perverse incentives" created by this, or similar, arbitrary "proxy measures" of school effectiveness, as Cowie, Taylor and Croxford put it in their critical appraisal of "Tough, Intelligent Accountability in Scottish Secondary Schools and the role of Standard Tables and Charts (STACS)" (Scottish Educational Review, 39(1)).
The Pisa analysis offers a tantalising prospect of more reliable comparisons of performance, over time, across 65 countries. It is less open to manipulation or "teaching to the test", since it assesses competence in the application of knowledge, not just the mastery of examination techniques or particular facts.
Pisa's major focus in 2000 was on reading, in 2003 mathematics, in 2006 science and in 2009 reading again. Behind the figures lies a mass of (usually unread) technical statistical manipulation to increase confidence in the final results. Although the figures are always approximate, the headlines in executive summaries are often used by politicians, journalists and others as if they were exact, claiming relative success or failure for Scottish education. What they actually show is that between 2000 and 2003, performance declined a little in reading and maths, and since 2006 performance has flatlined in all three areas.
Overall, we have hovered above the OECD average, sometimes in the upper areas of the "average" band, sometimes just above it. In 2009, only Finland, Hong Kong and Shanghai outperformed us significantly (more than 5 per cent better) in all three areas, with Singapore, Korea, Japan and Canada outperforming us by more than 5 per cent in two areas and the Netherlands, Australia and New Zealand outperforming us by less than 5 per cent in all three areas. A good performance among 65 countries, but one that can doubtless be improved.
Topping the Pisa league tables may not, however, be all that we want out of our education system. Curriculum for Excellence tells us that the four capacities describe the broad outcomes we seek. They incorporate important learning skills, but also desirable social behaviours. Are we giving such outcomes equal value? How do we recognise and reward these qualities? It seems that not only could we do better, but we could measure better what we do.
Daniel Murphy is a former headteacher and a tutor at the University of Edinburgh. Next month, he proposes improvements to the present measurement systems, to allow better judgements about whether Scottish education is improving.