This is no measure of improvement

24th October 1997, 1:00am

Ralph Tabberer finds flaws in recent research on literacy and numeracy.

There was considerable publicity earlier this month for educational researchers at Manchester University. Their work, undertaken within a single local education authority, suggested that implementing the national curriculum has failed to improve national standards in literacy or numeracy since 1988.

If true, this would be an important finding because there have been few studies examining standards over time, and it has been difficult to judge the full impact of national curriculum reform.

But in fact, the Manchester team has produced data on only five schools, randomly chosen within one LEA. Based on the results of cohorts of pupils taking the same test each year, they have concluded that there has been no change.

The tests used are limited, and certainly the reading test is no model of how to assess comprehension. It was designed and standardised well before the national curriculum arrived, so has not been adjusted to reflect what is now taught.

There is no evidence that the team checked the representativeness of their pupil sample against national patterns - or that they checked on area patterns which would affect recruitment to the schools. Indeed, the main paper which lies behind the headlines has proved to be unrefereed, and not yet accepted for publication.

This is at best a very limited addition to what we know. Does it demonstrate that standards cannot improve? Surely not. And I believe that it is dangerously complacent to draw on such a limited evidence base to suggest otherwise. There has been valuable work undertaken elsewhere to examine effective and improving schools, which this study ignores.

A key underlying question is: can our education performance get better? Simply ask the inspirational heads and teachers in the thousands of improving schools across the UK. If we can improve in so many individual schools, there should be nothing to stop us improving standards nationally. What does the other evidence have to say?

Admittedly, research has struggled with the question of standards changing over time. In the 1970s and 1980s, the Government monitored standards through its Assessment of Performance Unit (APU). The programme was troubled at one stage by doubts about the statistical model on which comparisons of pupils over time were based. In 1992, however, the National Commission on Education asked three researchers at the National Foundation for Educational Research who were involved in the APU to summarise the overall findings for literacy and numeracy. They were also asked to give a view of the available evidence on changing standards since 1945. They reported cautiously that standards had changed very little.

This scotched the pet theories of those who are forever quick to tell us that education in the 1950s, say, was much better than it is now. But it left hanging in the air the question of whether we could ever measurably improve on a national scale.

In the past five years, there has been some important fresh evidence of improving standards. For example, pupil performance at GCSE has improved, and last year, when the Office for Standards in Education and the School Curriculum and Assessment Authority examined past test papers, they concluded that improvements could not be put down to easier tests.

Another example has been the programme of reading research carried out by the NFER, which has traced a rise in Year 3 standards throughout the 1990s, after a small fallback at the very end of the 1980s. And, most recently, most of the key stage test results have shown improvements - first between 1995 and 1996, and again between 1996 and 1997.

As the new Qualifications and Curriculum Authority (QCA) has been quick to point out, tests are extensively piloted in advance. Teacher assessments have also improved. Or are the researchers arguing that we should disregard teachers' professional judgments?

There has been a concerted effort from policy-makers in the past few months to seek a more open dialogue with researchers and to bring better evidence to bear on central decision-making. It is therefore galling to find the Manchester research team chasing the media above the message. We rely on members of the profession to provide a firm evidential base and to draw measured conclusions in keeping with the scale and scope of their studies.

If they do so, researchers may have much to offer to the policy and standards debate. And then, we promise to listen.

Ralph Tabberer is at the DFEE standards and effectiveness unit.
