What do reading assessments really tell us?

Reading assessments can lead schools to think that an intervention hasn’t worked, but that’s not always the right conclusion, says Jessie Ricketts
7th December 2023, 3:57pm

For the last decade, secondary reading, and especially key stage 3, has been a focus of my research. So, when the Department for Education recently published its updated reading framework, I was thrilled to see it emphasise the importance of reading at this phase of schooling.

Over the years, I’ve seen excellent examples of primary and secondary schools working together to understand what they have in common and what needs to be handled differently in the primary and secondary contexts.

One observation that often emerges from these discussions is that reading needs identified in primary can change shape as young people move into and through secondary: needs accumulate and, over time, spread beyond reading itself.

I’m interested to see how this emphasis on secondary reading will play out in schools and in policy over the coming months and years.

However, there’s one area of the reading framework where I feel some clarification is warranted - and that’s assessment.

Reading assessments: what do they tell us?

The framework resonates with my experience that careful use of additional assessments can help us to identify needs with confidence and precision, and to align those needs with targeted support and intervention.

Assessments can also be used to evaluate the impact of reading approaches in schools. Overall, they can help us to allocate precious resources and inform decision making and changes to practice. Of course, though, we don’t want more assessment for assessment’s sake.

I have been working with a large number of schools since the framework was published and, through these conversations, have identified a few points that are worth clarifying if assessments are to be used to best effect.

Firstly, an important distinction can be made between criterion-referenced and norm-referenced assessments.

Criterion-referenced assessments tell us whether a student has met a certain target within the curriculum. These are aligned with our learning objectives in primary school and secondary school. They tend to be assessments that all pupils do, like the phonics screening check, KS2 Sats or GCSEs.

Norm-referenced assessments, on the other hand, tell us how a particular student is doing in relation to their peers. These assessments have been developed and tested with large groups of individuals to establish what the range of performance is for a particular age range.

This is normative data and we assume, as with height and weight, that reading is normally distributed. When we administer the assessment to a student, we can use their raw score and age to derive a norm-referenced, or standardised, score.

These scores often behave like IQ scores, having a mean of 100 and a standard deviation of 15. The expected range for any age group is typically defined as within one standard deviation of the mean: between 85 and 115.
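To make the arithmetic concrete, here is a minimal sketch of how a standardised score relates to a raw score, assuming we know the normative mean and standard deviation for a student's age group. Real assessments use published norm tables rather than a simple formula, and the numbers below are invented purely for illustration:

```python
# Illustrative sketch only: real assessments use published norm tables,
# not a direct formula. The norms below are invented for the example.

def standard_score(raw_score, norm_mean, norm_sd):
    """Convert a raw score to a standardised score (mean 100, SD 15)."""
    z = (raw_score - norm_mean) / norm_sd  # position relative to age peers
    return 100 + 15 * z

# Hypothetical 12-year-old norms: mean raw score 40, SD 8
print(standard_score(34, norm_mean=40, norm_sd=8))  # 88.75 -> within the expected range (85-115)
print(standard_score(24, norm_mean=40, norm_sd=8))  # 70.0  -> more than 1 SD below the mean
```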

How to use reading assessments to determine impact

Now, some assessments act as screeners. These are administered to all students and give us a temperature check on our cohort, and a starting point for identifying needs.

For these to be feasible, they are usually administered in groups. So a low score could indicate a reading need, or it could reflect something else, like inattention or disengagement.

Therefore, diagnostics - assessments administered, usually one-to-one, to the smaller group of “at risk” students - are needed to confirm reading needs.

These tests can also be used to work out what kind of reading need a student has: for example, do they struggle with word reading, reading comprehension, or both?

I have worked with Right to Succeed to develop a decision tree that exemplifies this approach, as discussed previously in Tes and in this paper.
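As a rough illustration of that screen-then-diagnose logic - and emphatically not the actual Right to Succeed decision tree - a simplified sketch might look like this, with invented thresholds and labels:

```python
# Simplified sketch of the screen-then-diagnose logic described above.
# This is NOT the Right to Succeed decision tree; the threshold and
# category labels are invented for illustration.

EXPECTED_MIN = 85  # one SD below the mean on a standardised scale

def triage(screener_score, word_reading_score=None, comprehension_score=None):
    """Route a student from a group screener to one-to-one diagnostics."""
    if screener_score >= EXPECTED_MIN:
        return "no further assessment indicated"
    # Low screener score: confirm with one-to-one diagnostics
    if word_reading_score is None or comprehension_score is None:
        return "administer one-to-one diagnostics"
    word_need = word_reading_score < EXPECTED_MIN
    comp_need = comprehension_score < EXPECTED_MIN
    if word_need and comp_need:
        return "word reading and comprehension need"
    if word_need:
        return "word reading need"
    if comp_need:
        return "reading comprehension need"
    return "low screener score may reflect inattention or disengagement"

print(triage(80))          # administer one-to-one diagnostics
print(triage(80, 82, 98))  # word reading need
```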

Norm-referenced assessments often generate reading ages as well as standard scores (or standard age scores). Reading ages are not to be trusted, as Megan Dixon and I, among others, have previously argued.

But there’s another challenge for schools that I have also observed over the years.

I have worked with many schools that are doing an excellent job of identifying reading needs and have put in place evidence-informed support and interventions.

They have found good impact within their school; the approaches came with challenges but, ultimately, were feasible and acceptable with appropriate professional development, funding and support.

Then they use a norm-referenced assessment to evaluate impact for their students and scores don’t increase. Conclusion: there has been no impact, it didn’t work, so what do we do now?

But is this the right conclusion? It might not be, for at least two reasons.

1. Scores staying the same can indicate impact

If we are using standard scores (and I think we should be), then a score staying the same over three months, six months, or even a year of intervention indicates that the student has made expected progress over this period.

For some students, this is a huge success, especially if before the intervention, standard scores were going down, which would mean they were falling further behind their peers.
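A hedged worked example - again with invented raw scores and norms - shows why a flat standard score means the student has kept pace with their peers: the raw score rises over the year, but so does the normative mean for the student's age.

```python
# Invented numbers for illustration; real norms come from published tables.

def standard_score(raw, norm_mean, norm_sd):
    """Standardised score with mean 100 and SD 15, as described above."""
    return 100 + 15 * (raw - norm_mean) / norm_sd

# Age 11, pre-intervention: hypothetical cohort mean 40, SD 8
print(standard_score(32, norm_mean=40, norm_sd=8))  # 85.0

# Age 12, after a year of intervention: the cohort mean has risen to 48
print(standard_score(40, norm_mean=48, norm_sd=8))  # 85.0 - the same standard
# score, yet the raw score rose by 8 points: a year's expected progress
```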

2. We might not be looking in the right place

Sometimes, what I see is that the main impact measure is a global assessment of reading - let's say reading comprehension. Perhaps what we have actually delivered is vocabulary work in the classroom and phonics work with some individual students outside it.

When we don’t see impact on our “distal” reading comprehension assessment - by that, I mean an assessment that is not focussed on the specific learning we have delivered - this doesn’t necessarily mean that what we have done hasn’t worked.

What we need here is a “proximal” assessment that tracks knowledge of the specific vocabulary or aspects of phonics that we taught.

We do need to see impact on these proximal assessments. However, any impact here may take time to generalise to our wider reading comprehension assessment. Generalised impact should therefore be treated as a longer-term goal, and as a bonus when we do see it.

If we are to celebrate the successes that we are having, we need to take care to develop an assessment framework that includes both proximal and distal measures of impact.

Professor Jessie Ricketts is director of the Language and Reading Acquisition lab at Royal Holloway, University of London
