
'Baseline 2.0 relies solely on data – the business in charge of harvesting it must understand the stakes attached'

The DfE is yet to announce the winner of the £9.8 million contract for collecting the data from the new baseline assessment – but whoever it is must be reliable, trustworthy and robust, writes one assistant head


In a week dominated by discussion about the misuse of data, is this the perfect week for the DfE to release details of Baseline 2.0?

The result of the £9.8 million competitive tender for the 'new' baseline assessment is due to be released imminently. The RBA (Reception Baseline Assessment) is intended to establish pupils' on-entry attainment in the first half term of Reception class and is set to become statutory in English schools in September 2020.

It might be more honestly marketed as 'new-ish' following the faltering introduction and then withdrawal of three previous baseline schemes intended to create robust national on-entry data for four and five-year-olds.

So, what is so new about 2.0? And shouldn't we be commending the DfE for doing what we rarely see in policymaking and often argue for in classrooms: making mistakes, critically reflecting and then having another go? Both questions are reasonable and particularly pertinent in a "post-Cambridge Analytica" climate.

The RBA is a significant departure – it has been widely criticised for ruling out the observation-based assessment methods that are used extensively in early years settings to build a picture of children's competency. This is a bit like telling a professional footballer to play without letting them touch the ball with their feet.

Observation has a long and proud history in early years education, and, first time around, 70 per cent of schools opted for a scheme using these methods because they felt it was pedagogically and ethically sound, as well as more likely to result in valid data.

'Starting point'

Why then has observation been ruled out in 2.0? The answer might lie in the government's consultation document on assessment published last September. It is peppered with terms like "robust", "reliable" and "trusted", suggesting that the fuzzy observational stuff is fine for making decisions about what, when and how to teach young children, but doesn't quite cut the mustard when it comes to the hard, data-driven facts of school performance.

The DfE was crystal clear from the outset: the purpose of the RBA is to create a baseline that forms "the starting point for the progress measure that will be used for school accountability". Here is the crux: all four- and five-year-olds will engage with as-yet-unspecified tasks, with their teachers or TAs, to produce standardised data that will then be used, in seven years' time, to judge the performance of the school.

As practitioners, we have long had concerns about how this type of assessment affects young children and about the reductionist way data has come to operate in schools. Does it really tell the complex stories about how children learn in our classrooms? Does it fully explain the value that a great school adds to children, their families and the communities they serve? Parents are rightly and vocally beginning to ask important questions too, like: is this right for my child? And should the first six weeks of my child's (and their classmates') experience in Reception be occupied by this exercise?

At the weekend, Jon Snow wrote with great clarity about his gratitude for the innovations offered by the tech giants, but he also cautioned about the need to be alert to the use of personal data for nefarious ends. Technology, particularly film and video, has transformed how we record children's progress in the early years. Done well, it has added richness and detail, doing justice to the complex learning journeys of individual children. There seems little evidence of that thinking here.

Who will be paid £9.8 million to be put in sole charge of harvesting, for free, the attainment data of every four- and five-year-old across England? We don't know the answer yet, but we had all better hope that, whichever edu-business wins the prize, "robustness", "reliability" and "trust" aren't just features of the data it produces.

Elizabeth Havard is an assistant headteacher in an inner London primary school and a former LEA advisor


 
