How do we measure inclusion?

The SEND Green Paper promised inclusion dashboards based on local area data, says Rob Webster, but how can we measure inclusion in individual schools?
14th November 2023, 3:57pm

How do you measure inclusion? It’s a tricky question, because common metrics, such as academic progress and exam scores, are, on their own, far too narrow to measure something so complex.

Contextual information offers fresh perspectives and brings balance to the data, but what is the best way to incorporate this? Between 2005 and 2011, Department for Education performance tables included a measure called contextual value added (CVA).

CVA attempted to account for the independent effect of a pupil’s background characteristics and circumstances on attainment by adjusting for factors beyond the school’s control, such as living in poverty or having special educational needs and disabilities (SEND).

It wasn’t perfect but, by recognising that different cohorts make different progress, CVA allowed for fairer comparisons between schools. Context is absolutely essential to how we measure inclusion because inclusion is an intrinsically relative concept.
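To make the idea concrete, here is a minimal sketch of how a context-adjusted score of this kind can be computed. It is an illustration only: the DfE’s actual CVA methodology used its own statistical model and defined set of factors, and the data, column names and simple regression below are all hypothetical.

```python
# Illustrative sketch only: the real CVA was calculated with the DfE's own
# model and factor list. Here an ordinary linear regression stands in for it,
# and all pupil data and column names are invented.
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical pupil-level data: prior attainment plus two context factors.
pupils = pd.DataFrame({
    "prior_attainment": [28, 30, 25, 27, 31, 24],
    "eligible_fsm":     [1, 0, 1, 0, 0, 1],   # free school meals as a poverty proxy
    "has_send":         [0, 0, 1, 1, 0, 1],
    "attainment":       [48, 55, 40, 47, 58, 39],
})

X = pupils[["prior_attainment", "eligible_fsm", "has_send"]]
y = pupils["attainment"]

model = LinearRegression().fit(X, y)

# Value added = actual attainment minus the attainment predicted for a pupil
# with the same prior attainment and background characteristics.
pupils["value_added"] = y - model.predict(X)
print(pupils["value_added"].round(1))
```

Averaging those residuals across a school’s cohort gives a single context-adjusted figure that can be compared more fairly between schools with very different intakes.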

If there were such a thing as a “school inclusion score”, your understanding of it might be incomplete without knowledge of how it compares with the scores of neighbouring schools. Do they, for instance, admit their fair share of pupils in the local community who have SEND?

Essentially, what you need alongside your “within school” measure is a “between schools” measure.
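As a rough illustration of what such a “between schools” measure could look like, the sketch below compares each school’s share of pupils with SEND against the local-area share. The school names and figures are invented, and a real measure would need to account for far more, such as types of need and placement decisions.

```python
# Hypothetical sketch of a "between schools" comparison: does each school
# admit its fair share of local pupils with SEND? All numbers are invented.
schools = {
    "School A": {"roll": 900, "send": 108},
    "School B": {"roll": 750, "send": 45},
    "School C": {"roll": 600, "send": 96},
}

total_roll = sum(s["roll"] for s in schools.values())
total_send = sum(s["send"] for s in schools.values())
local_share = total_send / total_roll  # local-area SEND rate

for name, s in schools.items():
    school_share = s["send"] / s["roll"]
    # A ratio above 1 suggests the school admits more than its "fair share"
    # of local pupils with SEND; below 1, less. It is a relative measure.
    print(f"{name}: {school_share:.1%} of roll has SEND "
          f"(local area {local_share:.1%}, ratio {school_share / local_share:.2f})")
```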

How to measure inclusion

There seems to be some acceptance of this in the DfE’s plans to publish local and national inclusion dashboards. The 2022 SEND Green Paper pledged to provide contextual information that would “make it easier to recognise schools and colleges that are doing well for children with SEND”.

The proposal later set out in the SEND and Alternative Provision Improvement Plan involved publishing school-level data on inclusiveness. But the DfE quietly dropped this part shortly after the plan appeared in March 2023.

The dashboards, the DfE said, would only show metrics based on the “local area”.

This raises the question: on what is the local measure based, if not data collected from individual settings?


The utility of the local metric will depend on how many schools comprise a “local area”. Too many, and the measure loses meaning. Assuming the DfE arrives at a workable range, the dashboards will provide a contextual measure of inclusion across local schools, but seemingly nothing that tells us about the inclusiveness of any one school.

By contrast, the published CVA was useful in this regard, as schools in disadvantaged areas could demonstrate to parents and others how well they performed, despite the impact of disadvantage.

However, the DfE took a different view and, in 2011, dropped CVA. It claimed that “the measure is difficult for the public to understand”, and that “it is morally wrong to have an attainment measure which entrenches low aspirations for children because of their background”.

Quite apart from there being little evidence that CVA led to lowered expectations for disadvantaged pupils, critics say the replacement measure, Progress 8, doesn’t reflect the achievements of schools in deprived areas. The DfE, though, maintains that it’s fair and helps parents choose schools.

How useful will inclusion dashboards be?

The decision to renege on publishing school-level data on inclusion is a blow to those who’ve been calling for the accountability system to recognise and reward inclusive schools, and incentivise those that are less inclusive to do better.

It’s hard to see how the dashboards will deliver on the improvement plan’s aim of catalysing “behavioural and cultural change”, as the local area metric provides cover for schools less inclined to be inclusive.

What’s more, it’s unclear if or how the dashboards will capture the subjective aspects of inclusion. Inclusion describes not just the objective characteristics of a school and the pupils on its roll, but also an individual’s experience of life and learning in that setting.

A sense of belonging, for example - the feeling of being accepted, valued, safe and supported as an individual within a school - is an essential ingredient of inclusion.

The technical challenge for the dashboard designers is that these things are not easily reduced to a compact score. Yet encapsulating the subjective elements of inclusion is necessary to deliver the ambition of the improvement plan.

The DfE had a moral objection to CVA, but rather than making it easier to understand and challenging where it was being used questionably, it ditched a measure that recognised the reality of, and the relationship between, schooling and the social ecosystem in which it sits.

This isn’t to say that a similar fate awaits the inclusion dashboards. But the government has made a principled commitment to improving outcomes and experiences for those with SEND, and the trustworthiness, breadth, transparency and longevity of the dashboard data may prove to be useful indicators of how far it values, and is committed to, inclusion.

Rob Webster researches and writes about SEND and inclusion. His book, The Inclusion Illusion, is free to download via UCL Press
