External assessment of GCSE English is in chaos - and that's no surprise given the theory which lies behind it, says teacher Iain MacDonald
THREE weeks ago, driving home on the last Friday of a long and tiring term, I caught the evening news. There were three quite different items of considerable moment: NATO bombs were once more falling on Baghdad, the United States was on the verge of impeaching the president, and chief inspector Chris Woodhead had made a public pronouncement with which, for once, I fully concurred. The national curriculum tests were declared unreliable.
As head of English in a large comprehensive school, my own experience of testing over recent years has been alarming, but not, I fear, atypical. Our GCSE results hover around 75 per cent A*-C, and we usually manage about 20 As in English. Translating back to Year 9 suggests a rough expectation of a similar number of level 7s, or perhaps a few more, at the end of key stage 3. Only once in the past four years have we landed within a dozen of this figure.
Two years ago, the year of the grudgingly acknowledged national marking cock-up, we were awarded a princely two level 7s. Results at the other end of the ability spectrum were equally incongruous. We spent about 15 working hours trawling through the scripts, selecting the most blatant examples of cavalier marking and highlighting dozens of clerical errors in the process. The head penned a number of blistering letters to relevant officials, and we sent the whole lot back for re-marking. We were rewarded, if that's the word, with an increase to four level 7s. That cohort has just completed its Year 11 mocks, and my colleagues and I are predicting around 25 As - about the norm for this school in recent years.
Last year, come teacher assessment time, you can imagine what we did. Out came every piece of exemplar material we could lay hands on. Last year's papers were scrutinised to establish exactly what did have to be done to net a 7. Descriptors were dissected and analysed for the all-too-elusive discriminators. Invariably, in the case of borderline students, the lower level was selected in the interests of avoiding the previous year's somewhat embarrassing over-assessment.
A week later the marked scripts arrived in school, without the promised levels enclosed. (It was the first harbinger of the national administration cock-up.) Our key stage 3 co-ordinator totted up the levels with the help of a photocopied set of grade boundaries coaxed out of Glasgow via the fax. None of us would believe her first tally of 77 level 7s. On a recount it proved to be 78.
My first, considered, reaction was that performance-related pay might have its merits after all. Then I chewed it over. What would happen in two years' time? By the logic of Bichard, Barber and Blunkett we'd be lined up for a target of about 60 As and a 95 per cent A-C hit rate in the year 2000.
You could dismiss this, and no doubt some will, as either a rogue set of results or at best anecdotal evidence. But really, should we be surprised that the external assessment of English in its current form is such a hit-and-miss affair? Consider the following:
* The national curriculum attainment target descriptors are hopelessly woolly. Take these three descriptors for structure in writing: "ideas are organised into paragraphs"; "simple and complex sentences are organised into paragraphs"; "paragraphing is used to make the sequence of events and ideas coherent to the reader".
These come from three different levels but I defy anyone who hasn't read the script to rank them.
* National tests insist on assessing "response to Shakespeare". I don't know an English teacher who objects to sharing the joy of Shakespeare with students, provided they can do it in an appropriate way for the ability of the group.
The style of the tests and league-table pressure, however, effectively drive teachers to second-guess the questions and coach students accordingly.
Our students were all given a detailed revision sheet for what we thought was more or less the only question that could be asked on one of the scenes. Notwithstanding a small degree of spin, it duly came up. Thus "response to Shakespeare" became a combination of "ability and willingness to revise" and "did your teacher guess correctly?".
* The marking procedures are lamentably flawed. Quangos can moderate till the cows come home, but as long as markers are only required to tick the bottom corner of a page of A4 to show that it's been seen, the application of levels will continue to be hopelessly haphazard.
What English teacher would summatively assess a 500-word piece without at least highlighting the basic errors or achievements? What head would tolerate such an assessment policy at any level? In last year's crop I have countless examples of apparent disregard for the most obvious requirements of the level descriptors, dodgy as they are.
So where do we go from here? Firstly, the Qualifications and Curriculum Authority needs to acknowledge that as a benchmark, the current English tests are as much use as a sand line in a sirocco. Then, the level descriptors as they appear in the 1995 English order need to be scrapped and rebuilt in a form which encourages teachers to apply rather than avoid them. Lastly, if we must test, let's be honest about the limitations of those tests, acknowledging the superiority of teacher assessment as an indicator of the student's progress in this most rich and complex of subjects.
Iain MacDonald is head of English at a West Midlands comprehensive.