National testing hasn't made the grade
Educational fashion usually operates on a cycle of 10 to 15 years, thereby permitting each new generation of self-styled experts to imagine they are creating new ideas. The surprise about targets is that their cycle has run for only three years.
So, without a pause for breath, we have to dig out our notes on the key factors in raising attainment and recycle the overheads for the next in-service. We shall exhort our teachers to yet greater efforts in the cause of Scottish education by reminding them of their previous achievements.
And there's the problem. Primary school achievements are expressed in the percentages of children who attain the expected 5-14 levels for their age in reading, writing and maths. The tests do not stand alone but "confirm" levels already established in classwork, one of the many fictions attached to national testing.
Another fiction is that the levels are an accurate reflection of attainment. Most teachers know from experience not to place much weight on a child's national test levels. They have seen too many newcomers from other schools, boasting levels which do not stand up to scrutiny. Secondary schools will know this feeling too, about some S1 entrants.
And it's not only writing, a notoriously difficult area for consistent assessment, but reading and maths too. Individual schools may have established a common understanding of levels, but it's a complex business and there are still many teachers whose 5-14 assessment is too variable.
National tests are also easy to manipulate. Not by pupils, but by teachers. And how would I know? Because I've done it. By accident, of course, and I will deny this statement if anyone tries to bring charges.
It was my fondness for my own voice that started it. I found myself reading aloud questions and possible answers with such expression that I was emphasising the correct answer unconsciously. The intelligent members of my audience quickly took advantage.
Then, my celebrated ability to read children's work upside down led to my finger pointing to a mistake in a lingering fashion. To anyone else, I was in the wrong position to see the paper but the child knew that the answer was worth checking. I have heard of others who can signify an error with a cough. And there's the teacher who told an unsuccessful pupil: "Come back and do the test again tomorrow."
It's simple enough to tighten the procedures. Forbid the opening of packs until the papers are required, have the test supervised by someone who does not teach the pupils and expect heads and teachers to abide by testing etiquette.
National tests are an unreliable barometer of achievement. Even when the teacher plays fair, the levels are too broad to mean much. Standardised tests have been available for years. They give a more reliable picture of a school's performance and a more accurate measure of a child's attainment against the whole population of the relevant age group. The tests have undergone rigorous trials, too, as proof of their statistical value. No one can make such claims for national tests.
We use standardised results to assemble our reading groups. A national test level C at primary 6 can equate to a standardised score of 90 or 115 or anything in between. Children at either end of that scale have different reading needs; indeed, some would be disadvantaged if they were all placed in the same group. The more accurate standardised score helps focus teaching more effectively.
If the efforts of pupils and teachers are to be judged by the extent to which our schools' attainments are raised, can we have a fair, consistent and reliable means of doing so?
Brian Toner is head of St John's primary, Perth.