What happens if the scales don’t balance

6th February 1998, 12:00am

Within Elgin High, as in most schools, we have developed our own, fairly detailed system of SCE results evaluation based on all available statistics. We see this as the most useful of several tools available to us in our attempts to ensure that all pupils attain the best grades they are capable of. Any analysis must be made on the basis of accurate and meaningful information, and any comparison on criteria which are meaningful because they are fair. However, I am uncertain about some apparent inconsistencies that have arisen.

Percentages of pupils gaining A-C awards at Higher are based on the size of the cohort at the start of fourth year. This penalises schools where a larger percentage of pupils leave at the end of the year. While in general terms it is good that as many pupils as possible stay on beyond S4, this ignores the wide variety in circumstances between individual schools.

In some areas there may be excellent employment opportunities that include training by employers. In some areas there may be further education colleges that provide particularly relevant and useful courses for 16-year-olds which articulate into higher education. In some communities there may be a particularly strong employment ethic rather than a continuing education ethic.

Remoteness is another factor which may affect staying on rates.

In the light of these points would it not be fairer to base Higher statistics on percentages of the S5 and S5-S6 cohort at the start of the session for which results are being measured?
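The effect of the choice of base cohort can be seen with a simple worked example. The sketch below uses purely illustrative figures (none of them come from this article or from any real school), assuming a school where 100 pupils start S4, 60 stay on into S5 and 30 gain three or more Highers at A-C:

```python
def pct_gaining_awards(num_gaining: int, cohort_size: int) -> float:
    """Percentage of a cohort gaining three or more Highers at A-C."""
    return 100.0 * num_gaining / cohort_size

s4_cohort = 100  # pupils at the start of S4 (illustrative figure)
s5_cohort = 60   # pupils who stayed on into S5 (illustrative figure)
gained = 30      # pupils gaining three or more Highers at A-C

# The same results look very different depending on the base chosen:
print(pct_gaining_awards(gained, s4_cohort))  # 30.0 against the S4 base
print(pct_gaining_awards(gained, s5_cohort))  # 50.0 against the S5 base
```

On the S4 base the school records 30 per cent; on the S5 base, 50 per cent. A school in an area with strong employment opportunities, and therefore a lower staying-on rate, is penalised only under the first measure.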

The Higher statistic on which rank ordering of schools is based is the percentage of the S4 cohort which gains three or more Highers at A-C in S5.

It would be of help to have an explanation of why this particular one is chosen rather than any of the others. If one statistic only is to be emphasised, why should it not be the percentage gaining three or more Highers at A-C in S5-S6, since this more closely reflects what employers and higher education institutions are interested in?

It would also help discount the effect of differing presentation policies. At present a school that does particularly well with sixth-year pupils is significantly disadvantaged. Again I would argue that these comparisons should be based on the S5 and S6 cohort rather than the S4 one.

Most schools would agree that added value is an important factor. However, as I understand it, at present the measurement of added value is to be limited to the improvement between Standard grade and Higher.

If there is confidence in the accuracy of primary schools’ attribution of 5-14 levels in language and number to individual pupils, as I believe there is increasingly, then surely the effectiveness with which a secondary school adds value to the attainment of its pupils is best measured between the position each pupil holds on entry to S1 and the position they hold on leaving school.

To measure it over one year, from S4 to S5, as is apparently the case with the recently issued information for parents, rather than over the five or six years which the average pupil spends at secondary school, seems particularly limited.

Worse than this, it could well penalise the most effective schools. A school that has a very able intake of pupils with which it does badly at Standard grade but well at Higher will be admired for the impressive added value it achieves.

But a school that has an intake of much lower ability with which it does extremely well at Standard grade, achieving better grades and therefore better statistics than could have been expected, has little scope left to squeeze out good Higher results, and will be stigmatised for the apparently poor added value it achieves. That situation must be unacceptable.
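The perversity of a one-year measure can be put in numbers. In the sketch below the score scale, the schools and all the figures are hypothetical, assuming added value is taken simply as the gain between a Standard grade score and a Higher score:

```python
def added_value_s4_to_s5(standard_grade_score: float, higher_score: float) -> float:
    """Added value measured over one year only, as the gain from S4 to S5."""
    return higher_score - standard_grade_score

# School A: very able intake, underperforms at Standard grade, does well at Higher.
school_a = added_value_s4_to_s5(55, 75)  # apparent added value of 20

# School B: weaker intake, does extremely well at Standard grade,
# leaving little headroom for further gains at Higher.
school_b = added_value_s4_to_s5(70, 75)  # apparent added value of 5

# School B did the better job with its intake, yet the one-year
# measure makes School A look four times as effective.
```

Measured instead from the position each pupil holds on entry to S1, School B's strong Standard grade results would count in its favour rather than against it.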

In calculating relative ratings between departments in the same school, pupils who are withdrawn from an individual subject or who fail to get an overall award in a subject are excluded from the calculation of that subject’s relative ratings. Is it not inappropriate that a department or subject that does not persist with the most difficult or least able pupils it teaches, but withdraws them from the exam, benefits by having its failure with these pupils excluded from the calculation of its relative ratings?
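The incentive this creates can be illustrated with a toy calculation. Assuming, purely for illustration, that a subject's rating is the mean points score of the pupils included in the calculation, withdrawing the weakest candidates before the exam lifts the rating without any improvement in teaching:

```python
def mean_points(points: list[float]) -> float:
    """Mean points score of the pupils counted in a subject's rating."""
    return sum(points) / len(points)

# Illustrative points scores for six pupils taught by a department.
all_pupils_taught = [6, 5, 5, 4, 1, 0]

# The same class after the two weakest pupils are withdrawn from the exam;
# under the current rules their results vanish from the calculation.
presented_only = [6, 5, 5, 4]

print(mean_points(all_pupils_taught))  # 3.5 if every pupil taught is counted
print(mean_points(presented_only))     # 5.0 once the weakest are excluded
```

The department that persists with its most difficult pupils records the lower figure, which is precisely the inappropriate incentive described above.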

John Aitken is an assistant headteacher at Elgin High School. The views expressed here are personal.
