I was quite positive when I first got the Teacher Training Agency's notes and guidance about the proposed mathematics national skills tests for trainee teachers. According to the TTA, the "content of the test will be relevant to the teacher's role, and the majority of the questions will be written in the context of data commonly available within schools which a newly qualified teacher (NQT) could be expected to use".
This reassured me. However, having read the materials, I have two areas of concern: the contexts chosen and the mathematical demands.
It is essential that questions appear in a plausible context, but the sample questions do not. For instance, written questions 17 and 18 refer to a table of data about school absences. The pupil roll of 777 is presented as having been collected over a five-year period. I cannot believe any school would have such a stable roll over such a long period.
Written question 32 tells us that a pupil achieved three different scores in three different tests and then invites the NQTs to judge "in which test did the pupil do best?" Here, the context destroys the purpose of the question, which is fundamentally about converting fractions to percentages or decimals as we can only judge which is "best" on a trivial "highest-is-best" basis.
In reality, who is to say that highest is best? Should the pupil's performance be compared with others'? If so, we need to know something about the others' results. Similarly, should the pupil's results be compared with his or her previous performance? If so, we need to know something about past results. If the TTA wants NQTs to calculate the highest score, this is what it should ask for.
The contexts chosen for the questions are, superficially, within the TTA's prescribed parameters, but if the effect of contextualising distorts the common sense or mathematical demands of the question, then surely this is counter-productive.
I would like to compare results where the same mathematical demands are made but without the contexts. This would show whether contextualisation helps or hinders.
As for the mathematics, written question 18 asks: "What is the percentage decrease in unauthorised absences from term one to term two to one decimal place?" The relevant data shows there are 777 pupils on roll, and that there have been 114 absences in term 1, 97 in term 2 and 322 over the whole year.
Given this, we can calculate that the absence rate for term one is 114 ÷ 777 × 100, or 14.67 per cent, and for term two 12.48 per cent, suggesting that the required answer is 14.67 − 12.48 = 2.19, or 2.2 to one decimal place. However, the answer given in the materials is 14.9 per cent. Where did this come from? Well, it is obtainable by calculating the difference between 114 and 97 (17) and then expressing this as a percentage of 114: 17 ÷ 114 × 100, or 14.9 per cent. This suggests that the question asked was not the one intended. It is crucial that the TTA ensures such confusion does not arise in the real tests.
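For readers who want to check the two readings of the question, the arithmetic above can be sketched in a few lines of Python (the variable names are mine, not the TTA's; the figures are those given in the sample materials):

```python
# Data from the TTA sample materials: 777 pupils on roll,
# 114 unauthorised absences in term one, 97 in term two.
on_roll = 777
term_one = 114
term_two = 97

# Reading 1: the fall in the absence *rate*, as a difference of percentages.
rate_one = term_one / on_roll * 100          # 14.67 per cent
rate_two = term_two / on_roll * 100          # 12.48 per cent
fall_in_rate = round(rate_one - rate_two, 1) # 2.2

# Reading 2: the percentage decrease in the *number* of absences,
# which is what the published answer assumes.
decrease = round((term_one - term_two) / term_one * 100, 1)  # 14.9

print(fall_in_rate, decrease)  # 2.2 14.9
```

Both readings are defensible interpretations of the question as worded, which is precisely the problem.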
There is a lot riding on the outcome of these tests. Pitch the level too low, and the press will say the tests are easy and again jump on the teacher-bashing bandwagon. Make them too difficult and we run the risk of failing large numbers of this year's NQTs.
It is clear that these sample materials are far from right. We must ensure that the contexts are relevant and free of distortion, and that the mathematical demands are exactly as intended.
Rod Bramald is a lecturer in education at the University of Newcastle upon Tyne