The moderation system described by John Green appears to be very similar to that which was used to guarantee standards for coursework. However, feedback to pupils will be limited to a series of ticks in red ink and marks for each question. This seems rather paltry compared with the marginal comments and summary comment that most pupils are used to with coursework. How such a snapshot test could really be used for "diagnostic" purposes I cannot imagine.
Andrew Watts was more discerning about the benefits of teacher assessment but seemed to go astray on the issue of reliability. Any single snapshot test is unreliable, no matter how much trialling it has had. Only a series of tests over a period of time can be considered truly reliable in the sense that they can average out misleading variations in performance. I am not suggesting that pupils should be condemned to a treadmill of national tests - rather that properly moderated coursework would be a much better guide to pupils' achievement.
Nor is standardisation of test content a guarantee of reliable results on a national basis, as standardised tests can unintentionally discriminate against particular groups of pupils. On a national basis, standards should be monitored by sampling: it is a more manageable and appropriate method of establishing whether levels of achievement are rising or falling.
What I would not dispute is John Green's comment that "the whole process is a massive undertaking". It is a pity that the resources of time and money which are being poured into this process are not being better spent on a fairer and more reliable system.
ANDY GIBBONS
Organiser, Campaign for Raising Standards in English
136 Trevelyan Road, London SW17