Across the test divide

22nd September 1995, 1:00am


https://www.tes.com/magazine/archive/across-test-divide
How will secondary schools use the national test results of their new pupils? And what do the children think about it all? Steve Ellis reports on his research.

This September the first children to have been immersed in the national curriculum since they started primary school moved on into the secondary sector. These “national curriculum beings” carry with them their test results from Year 2 and the more recent results or levels from Year 6.

This raises some important questions for both primary and secondary schools. What information, for example, can primary schools provide about them? What information do secondary schools require?

And, perhaps more importantly, what use will be made of this information? How will secondary schools use the test results at key stage 2? Will the results be used as “performance indicators” to set and band children into ability groups at secondary level, and how will this be achieved?

Paul Black, in his summer lecture at King’s College London (TES, July 14), indicated the problems of using test data of limited reliability, and the way in which simple, short tests dominate our educational thinking. Visits to secondary Year 7 co-ordinators and conversations with primary headteachers at the end of last term suggest Professor Black’s concerns are well-founded.

Professor Black called on the Government to invest in research on testing and changes to the curriculum. My own research sought to explore assessment issues with both teachers and children. More than 100 Year 6 children in six primary schools were interviewed, together with headteachers and assessment co-ordinators. The intention is to track the pupils as they move into secondary schools from this month onwards.

Much has been written about the disruption the tests caused to primary life, but very little on the effects of testing on 10 and 11-year-olds. What are children’s perceptions of teacher assessment? How did the children cope with the tests? How do children perceive success and failure in assessment terms at an individual level? What children feel, think and experience within what is essentially an adult, Government-imposed system should also be part of our research agenda.

The analysis of the data is at an early stage, but preliminary examination reveals that children are generally comfortable with in-house assessment, although teacher marking is inconsistent. This has sometimes led to confusion in assessment feedback.

Second, children were generally unaware that they had been following the national curriculum for the past six years, although there were some exceptions where the use of “national curriculum speak” was well developed. Many children, however, were aware of national testing arrangements from the summer of 1994, especially where the Year 6 children attempted the KS2 pilot tests that summer.

Third, the children were asked how they had been prepared for the tests both in school and at home. In some instances, intensive preparation in class (weekly tests, homework, mock examinations, the use of the 1994 pilot papers) began in the spring term 1995. Often this was a co-ordinated approach involving groups of Year 6 teachers and senior management.

This pattern was mirrored at home, with some children engaged in systematic revision with the help of parents, sisters and brothers. In other cases preparation was less intensive and over a shorter period, and the children were less likely to engage in “test talk”. They were also generally unaware of the benefits of specific revision, and some children simply resorted to library searches for unrelated materials. This lack of revision technique is worrying: anxious about the tests, the child seeks an inappropriate refuge.

Fourth, all the children interviewed experienced considerable anxiety before the test. Feelings were most intense the weekend before, and particularly on the Monday of the first test. Some children cried during the tests. Children felt most threatened during the maths tests, and there is some evidence that many suffered mental burn-out as they grappled with the varying levels on the papers. However, they all agreed that testing was a good idea. Their sources of information for this view ranged from the TV to parents and friends.

Finally, the children also felt that the results would be used by secondary schools to band them in some way. They were generally quite comfortable with this idea, although there was confusion about how this was to be achieved. Similarly, children were unclear about what a level is. In achievement terms many were unable to indicate what would be considered a good level of achievement (level 4 is the norm). Some children were aware of discrepancies between teacher assessment and the test scores (many of the cohort had been underestimated by their teachers).

Some senior primary managers and Year 7 co-ordinators in secondary schools were adding up each child’s test scores to produce a raw total (three level 5s equals 15 points) and using this information to allocate the child to a particular band.

This is despite the acknowledged broad nature of the levels, and the inconsistency of the test papers across the subject areas (mathematics is deemed to have been more difficult than science).

Furthermore, there does not appear to be consistency between levels at key stages 1 and 2. One headteacher recently gave a confident Year 2 child (recorded as having achieved a level 3 at KS1 mathematics) the level 3 tasks at KS2. He said: “The child was, quite simply, unable to access the tasks on the KS2 paper. What does this indicate about consistency, reliability and continuity of testing between the key stages?” The problem was raised at a recent School Curriculum and Assessment Authority conference, and the head involved is writing to the examination board about his concerns.

Some Year 7 co-ordinators are attempting to make provisional band allocations on the evidence of teacher assessment, adjusting these placements using a mixture of raw percentage scores for the tests (supplied by some primaries) and the broad levels (top, middle or bottom of level 4) supplied by others. One secondary co-ordinator said: “We want what is best for the children and to make the best possible use of information from the primary feeder schools. This is proving difficult.”

There is much still to be done if we are to fulfil the broader intentions of Paul Black and his colleagues for national curriculum assessment.

The 1987 Task Group on Assessment and Testing envisaged that the teacher would focus, on the one hand, on the use of diagnostic and formative information about an individual’s achievements and, on the other, on evaluative information about the educational system. The tension between the summative and formative purposes of assessment remains.

Unfortunately, it is the children in the system who will count the cost, as mistakes are made while they move across the divide from primary to secondary. For the children, “test anxiety” may well have subsided. However, they carry with them into their new secondary schools the results of the first summative national tests since the 1970s.

How this new information will be used is a question which requires careful handling as the children make the transition.

Steve Ellis is a senior lecturer at Manchester Metropolitan University
