Q Can a junior school whose value-added is less than 100, get any more than a satisfactory? Is this compounded by an infant school that obviously inflates results?
A Value-added data is an important indicator but it is certainly not the only thing that inspectors will be looking at, even to judge achievement. If school leaders think the value-added data is giving a misleading picture, then they need to make this clear to inspectors at the outset, for example in their self-evaluation form. Inspectors expect to be able to rely on the value-added data in the absence of contrary information, but this data might give a misleading picture in, for example, a small school with low numbers in each cohort. In a school where pupils have very high key stage 1 scores, inspectors will be aware of the "ceiling" effect caused by the fact that it is not generally possible to exceed level 5 at key stage 2. It is unhelpful to assert that a feeder infant school "obviously inflates results", because inspectors cannot be expected to pass judgment or comment on schools other than the one they are inspecting. If you believe your feeder school's results are inflated, then you should be doing something about it immediately rather than simply raising it when the inspectors call. If you have evidence, it should be discussed with the infant school and, if necessary, raised with your local authority, which is responsible for moderation.
Q Why do inspectors seem to prefer tracking average point scores to sub-levels that pupils reach?
A Inspectors should not be imposing their personal preferences on schools. It is not their job to preach but to check whether what the school is doing works. Provided your tracking systems and target-setting are rigorous and incorporate appropriate challenge, then I am sure that sub-levels are a perfectly acceptable way of working. It is just worth bearing in mind, however, that the national comparisons will mostly use average point scores (APS). APS and sub-levels are directly convertible, so there should be no problem in relation to individual pupil tracking. The difficulties can occur when comparing overall performance. For example, a school with 100 per cent level 4 at key stage 2 may seem to be doing very well indeed, but its APS may be unremarkable if there are no pupils attaining level 5.
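To make the comparison concrete, here is a minimal sketch of the conversion and the APS calculation. It assumes the commonly used point-score scale (a whole level at "b" scores 6 x level + 3 points, with "c" and "a" sub-levels two points below and above); the function names and cohort data are illustrative, not from any official source.

```python
def sublevel_to_points(sublevel: str) -> int:
    """Convert a sub-level such as '4b' to a point score (assumed scale: 4b -> 27)."""
    level = int(sublevel[0])
    offset = {"c": -2, "b": 0, "a": 2}[sublevel[1]]
    return 6 * level + 3 + offset

def average_point_score(sublevels: list[str]) -> float:
    """Mean point score across a cohort of pupils."""
    points = [sublevel_to_points(s) for s in sublevels]
    return sum(points) / len(points)

# A cohort that is 100 per cent level 4 (all at 4b) has an APS of 27.0 ...
uniform = ["4b"] * 10
print(average_point_score(uniform))  # 27.0

# ... while a mixed cohort that is NOT 100 per cent level 4, but has
# some level 5 pupils, comes out higher.
mixed = ["3b", "3a", "4c", "4b", "4b", "4a", "5c", "5b", "5b", "5a"]
print(average_point_score(mixed))  # 28.4
```

This is the point of the example in the answer: headline percentages at a threshold and APS can rank the same two schools differently, because APS is sensitive to attainment above (and below) the threshold.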