Asked by the government last year to name the single biggest cause of their overburdened work lives, teachers gave a simple answer: data. When 53 per cent of 44,000 teachers tell you that data is a problem, Houston, we have a problem.
I’ve lost count of the number of times teachers have written, blogged or tweeted in desperation that our obsession with data is crippling education, not enhancing it. Data, they complain, has turned teachers into data managers and schools into audit factories.
The standard response of many of my colleagues in the assessment sector is to suggest that teachers would see the light if only they knew how to handle data properly, or tested more frequently, or used a different type of metric. As a strategy, patronisation has its limitations; as analysis, it is dangerously mistaken.
We would do better to accept what is driving this disillusionment: too much data is about control, not improvement; too much of it is misused; and far too much of it is pointless.
Bad data versus good data
The first and most obvious example of bad data is what has been labelled "national accounting" – data that is used to grade systems and schools rather than improve the student. The intent of this top-down assessment is understandable. Politicians want to know how the system as a whole is performing and to contrast good practice with bad. Unfortunately, top-down assessment is usually based on tests that were designed to assess the student not the school, and certainly not an entire education system.
As a consequence, an awful lot of the benefits of assessment are lost. The desire to understand an individual student’s strengths and weaknesses is eclipsed by an exercise to pass or fail a school. Effective, tailored intervention is obscured by the need to meet institutional targets. The judgment of teachers is sidelined in favour of blunt, undifferentiated monitoring tools designed to control rather than improve. The temptation to "game" the system to meet external targets becomes overwhelming while the specific needs of individual children are relegated.
This is a huge mistake. If politicians want top-down assessments they shouldn’t try to co-opt assessments designed to assess individual need. And if they aim to gain widespread support for them, then they must ensure that teachers are allowed to use the data for the benefit of their pupils.
Unfortunately, politicians aren’t the only ones who unwittingly promote bad data. Schools and teachers are guilty of doing the same. A common mistake is to assume that more data is good when, as so often in life, less is more. Presenting colleagues or governors with a 2in-thick file of data is guaranteed to obscure rather than illuminate. Repeating the same assessments more than a couple of times a year won’t add any extra insight, but it will add to teacher workload. Nobody wins from a surfeit of data. It risks swamping the meaningful with the meaningless.
A third type of bad data is the short-lived variety. Data that lingers no longer than a mayfly, that doesn’t build on previous results and that fails to indicate what should come next. The best data is cumulative; it helps build profiles of children over several years and suggests the most appropriate interventions. Children’s lives are not a series of data events. Sporadic, unrelated assessments ultimately end up treating them as if they were. We wouldn’t rely on Snapchat for a video diary. Snap assessments have no more legitimacy.
Closely related is the fourth type of bad data: data that isn’t really data at all because it’s subjective or isn’t related to any other information. A tracking system, for instance, doesn’t produce data. It can make for very attractive dashboards and look superficially scientific, but what it holds is in essence an accumulation of subjective observations.
How many hours do teachers spend every year inputting pupil observations in the mistaken belief that this is "data"? It isn’t. It is the antithesis of good data, and it is particularly bad for those who have already been categorised – the disadvantaged and the able – because, with the best will in the world, human assumptions all too frequently influence observations. An awful lot of ad hoc, internal assessments fall into this category too, because they aren’t robust, replicable or rigorous.
The final kind of bad data is incredibly common because it is so easy to get wrong: ascribing large meanings to small variations. As every parent and teacher knows, children do not progress in a linear fashion. They take wrong turns, U-turns, short cuts, stall and occasionally go into reverse. Unfortunately, bad data ignores this inconvenient fact of life and magnifies every reversal, every stutter and sidestep.
This misdiagnosis isn’t confined to individuals; I’ve known confederations make huge assumptions about entire schools based on one set of data unrelated to long-term trends or any other measures. No doctor would put a patient on statins after one high cholesterol test. And no teacher or school leader should ever leap to conclusions based on a single result.
Given the sheer amount of bad data around, should we be surprised that 53 per cent of teachers cite data as their biggest burden? Should we wonder that so many have become so exasperated that they would prefer to have nothing to do with assessment? Yet damning all data because so much of it is bad is self-defeating. Schools without data are driving blind. What they need is good data.
Good or smart data provides context. It helps teachers assess students over time and in relation to each other. It is fair – it only assesses what it was designed to assess – and it is appropriate. If a teacher thinks an assessment isn’t suitable for a student, they should find one that is.
Smart data is also infrequent – most children only need to be formally assessed a few times a year. It is transparent, honest in its intent and freely shared. And it is only as good as the skill teachers have to use it. If they don’t have that skill then they should be helped to acquire it or the data should be interpreted by people who do.
Ultimately, smart data leads to something – it isn’t an end in itself but it provides enough information for a teacher to make a decision about a student. It liberates, it does not control. The right data can make a real difference and simple data can make very smart schools.
The irony is that we experts have not always been that smart about data. Good data, like good teaching, starts and ends with the student. And we really shouldn’t have to score their every move to figure that out.
Greg Watson is chief executive of GL Assessment