I love this time of year. It’s when we emerge from our winter torpor. From January onwards, we’ve been looking for signs of spring’s arrival, assessing its progress via a series of milestones: bulbs flowering, birds nesting, that first evening drink in the garden.
If we were more methodical, we might record the dates on which we first noticed the appearance of key indicators.
Imagine that you have been tasked with recording the progress of spring and that your performance is determined by the evidence you collect. Noting occasional dates is not enough, so you set about counting flowers once a week.
Some weeks, nothing happens. But a week in which “no change” is recorded is frowned upon and leads to the counterproductive decision to increase the schedule to daily recording. You are now reduced to getting down on your hands and knees with a ruler to measure plant growth in millimetres. The numbers must go up.
You can see the connection with education, I am sure. One of the main mistakes made with levels in England’s primary schools was adopting sublevels and points to measure progress over shorter and shorter periods.
This was not done to support teaching and learning, but in response to ever-growing pressures to show improvement. Levels became a device for accountability and performance management.
Consequently, the data ceased to reflect pupils’ learning and instead showed what we required it to show: continual, upward, linear progress.
Ticking the boxes
Teachers, under pressure to keep the numbers going up, ticked a few more boxes and made sure that the latest value was higher than the one before. The metrics that we used were not rooted in what could be measured and did not encapsulate the complexities of learning; they simply reflected how many times a year schools collected assessment data.
If we had 12 so-called data drops per year, then we’d conveniently have 12 increments on our scale: an increasing number of arbitrary values to “prove” progress over ever-tighter timescales. But the irony is that the more complex these systems become, the less they actually tell us about pupils’ learning.
The final report from the Commission on Assessment Without Levels notes that “many systems require summative tracking data to be entered every few weeks” and warns that “recording summative data more frequently than three times a year is not likely to provide useful information. Over-frequent summative testing and recording is also likely to take time away from formative assessments that offer the potential not just to measure pupils’ learning, but to increase it.”
Sadly, this vital piece of advice is being ignored by some who scrutinise school performance, and their demands should not go unchallenged. Are we attempting to measure the immeasurable – and is this obsession actually a danger to children’s learning? What our systems and approaches need is a spring clean.
James Pembroke founded Sig+, an independent school data consultancy, after 10 years working with the Learning and Skills Council and local authorities
This is an article from the 8 April edition of TES.