I read your last reply with interest.
I am in no doubt that it is the link between exam results and accountability that adds to their dominance. I say “adds to”, as a huge part of that dominance comes from the overdeveloped sense of responsibility that most teachers have. I clearly recall the commitment to get the best possible results for my students and the desperation to see how they had done as soon as results became available. I wanted to give them the currency that would offer them the biggest possible range of choices for their next steps and I think most teachers share that priority.
Unfortunately, the reality is that they don’t always make the best choices. Some of the students with whom I worked hardest, and for whom I did most to subvert the system and secure the best results, were the first to fail in higher education.
All of this came back to me forcefully when I was working in a school recently where the ambition was to get more A-level passes. They were completely disconcerted when I asked them if they were convinced that this was in the best interests of the young people. I accept that they had the best of intentions, but the priority was to get the “best” exam results by their standards, not the best and most appropriate set of experiences and qualifications for each young person in their care.
There is so much pressure on schools: their own ambitions for young people, the constant rhetoric of high attainment, and the centrality of exam results to accountability. It is hardly surprising that some seem to use the children to serve the data.
I am reminded of the comment that Ian Gilbert, author and driving force behind the Independent Thinking group, made on Twitter: “Always use the data to serve your children, and never the other way around”.
One of the results of that is the phenomenon you observe of multi-disciplinary work being strongest further away from the critical periods of exam preparation. I think that there are different issues in primary schools in any case because teachers are not subject specialists and are less likely to plan learning around a rigid subject framework, so one could argue that this plays a significant part in their use of multi-disciplinary approaches.
Interestingly though, I think that there is evidence that teachers in the early years of secondary schools are also more prepared to try such approaches where they are free from the pressure of external assessment. As you suggest, the effects of accountability are profound.
I am also wrestling with the issues that you raise about data and the role that examination results play in that. It remains an issue in Scotland where I have seen duplicate (word for word) descriptions of leadership in two school reports leading to different grades, because the attainment data limited the level available to one of the schools. We had a situation like this in one of the schools in Stirling when I was director there. A headteacher in a large, complex and improving primary school was rated relatively poorly for her leadership, because “the attainment data would not support a higher grade”. As a result, effective practice was questioned when it should have been encouraged.
As you have suggested, we are also still faced with efforts to create league tables from data, but in Scotland we are in the process of implementing a new data system called Insight. It undoubtedly has many benefits, not least that it is forward looking. It includes destination data and captures information on young people’s next steps from school. In its latest incarnation, it looks at how well that destination has been sustained.
I welcome the idea that we are trying to see where qualifications lead rather than simply continuing to treat them as an end in themselves. There are many other strengths, and I did think that one was the increasing difficulty of comparing school with school and the determined avoidance of a league table approach. Instead, schools are given a “virtual comparator”. It is the educational version of an imaginary friend: just like you in most respects, but maybe performing differently.
I am now increasingly questioning that.
In discussion with colleagues who have the task of supporting schools, I have found that they are concerned they do not have easy access to the data that would allow them to do their job. If you think of the London Challenge, data was essential, and it was data that could have been used in many different ways. In London, and in the other Challenges, it was used positively to help in the improvement of schools, to identify good practice and to frame partnerships. It could equally have been used to label schools and inform the market.
I think that we are offered another possibility here in cross-border learning. Is the approach that you describe in England a better one, despite the faults that you clearly identify? Is there more to be learned from the emerging Scottish approach?
Perhaps, yet again, we need to be looking to see how elements of both can be combined.
I hope you don’t mind, but I am going to write again to you about this. It will disrupt the symmetry of our correspondence, but I am on a roll!
All the best,