Confused in a jungle of assessment

12th April 1996, 1:00am

Steve Ellis looks at the conflicting mix of tests and grading systems that straddle the primary-secondary divide.

Eight years on from the Education Reform Act, teachers, children and parents are still trying to get to grips with an assessment system which makes little sense. Teachers are attempting to make accurate judgments of children’s achievements and to apply appropriate and consistent standards when carrying out these measurements. But what happens in reality when such principles are applied across the primary-secondary divide?

In September 1995, I reported on research which looked at how secondary schools used the national test results of their new pupils and what the children thought about it all (TES, September 22, 1995). More than 100 Year 6 children were interviewed in six primary schools, together with headteachers and assessment co-ordinators. The intention was to track the pupils as they moved into secondary schools from September 1995 onwards. This second phase of the research has now been completed, with the children being re-interviewed in their new schools last December.

Most of the children understandably expressed some excitement and anxiety at the prospect of entering secondary school. Common worries included the prospect of “bullying”, finding their way around, how the teachers would react to individuals, and having to visit up to 16 classrooms during the course of a week. As one child, Graham, succinctly pointed out: “You’re at the top of your primary school . . . it’s like a big mountain and you slip down to the bottom of it . . . and then you have to climb back up to survive.”

Nevertheless, many of the children had settled well. They had learnt very quickly to manage the day-to-day teaching and learning experiences, as well as coping with the expanded subject base. The most confident children also relished the prospect of using “real” scientific equipment, the art room, and much-improved access to technological hardware.

Many of the children, however, believed that the work was more difficult and demanding at secondary school. The children felt that they were covering “mostly new areas”, although in mathematics some reported that they had “stood still” while too much time was spent reinforcing the “basics”. Sadly, this was characteristic of a variety of secondary departments which consistently failed to recognise the value of, or attempt to develop, work carried out in the primary school.

Another radical departure from primary practice reported by the children was the introduction of systematic homework across a wide range of subject areas. All the secondary schools had a homework policy, which generally operated very much on the “banana principle” of waiting for a bus - nothing turns up for ages and then suddenly a whole “bunch” arrives. As Sarah put it: “Some days we get loads . . . some days we get nothing . . . in primary we only got one homework per week.”

Some of this may come as no surprise, because much has been written about the management of children across the primary-secondary divide, but very little in the context of developing whole-school assessment practice, of which national test results at key stage 2 form part. How do Year 7 pupils cope with the assessment demands put on them as they move across the divide? Is the assessment message the same for each departmental “Balkan state”?

At the time of the second interviews, the children perceived that they were to be allocated ability levels at the end of Year 7 (invariably for the core subjects) and teacher assessment had consequently taken on new meaning and importance. The key question, however, is what sense the children made of teacher assessment at whole-school level. Had individual schools developed a consistent formative assessment message? Or were subject departments left to develop individualised approaches?

The evidence suggests that schools are at very different stages in their assessment development. Some schools have a clear centralised policy of formative assessment encapsulated in a “working document” for developing whole-school assessment practice. Where this is the case, children were clear about the assessment criteria and levels of performance required. Other children were, however, left to make sense of a plethora of assessment practices which largely depended upon the preferences of the teachers in a particular subject area. This might include anything from raw scores, grades, levels and percentages to a variety of “scribbling and annotation”.

When the children were asked what sense they made of such a system, one girl, Susan, replied that she “didn’t even try, each subject has their own method of assessment. I get an A from one and an A1 from another subject. I’m not sure whether that’s the same or something different.”

Senior managers were also interviewed about the impact of the KS2 test results on curriculum organisation and planning in Year 7. Some schools had used the results to “band” children into broad ability groupings, while others ignored the levels and allocated children to mixed-ability classes. The overall pattern was further complicated by some departments - particularly maths - using internal testing to band children by the start of the spring term. Overall, the “core” departments felt that the KS2 test results were not a good indicator of children’s ability in these areas, and that the tests were largely unreliable and inconsistent.

Last summer, the children were convinced that the test results would be used by their secondary schools for streaming and setting. When asked about this issue in December, they were confused and found it very difficult to make much sense of the system. Several children were angry and frustrated at finding themselves in “set 8”, despite having achieved 3s and 4s in their SATs. Inside a term, these children had become disaffected with a system which labelled them as failures. For others, the SATs were in the distant past. They had moved on into a secondary world where they had to cope with demanding subject tests and testing.

Perhaps this is just as well, because the tests that the children sat last summer were inconsistent in the degree of difficulty between subjects. Similarly, the levels were defined in percentage terms, with some children just missing a particular level by a minimal score. The system is clearly fundamentally flawed.

At secondary school level, there is a commitment to making the “test experience” as painless as possible for the child. The focus for teachers is, as Caroline Gipps (TES, January 19) points out, “informal classroom assessment . . . to help pupils understand what they can do and what they need to do next to improve”, not national testing.

We must not lose sight of this as the Government surges ahead with its “testing agenda” (the latest development being optional SATs at the end of Year 4). The children should remain at the centre of the assessment debate and we as teachers need to come back to the basic questions about the purposes of assessment. What still remains unclear are the formative uses to which “testing” can be put in pedagogic terms at both key stages 2 and 3.

Steve Ellis is a senior lecturer in education at Manchester Metropolitan University’s School of Education (Crewe)
