University research funding is allocated partly on the basis of the Research Excellence Framework. Last year the government published a plan to create a parallel Teaching Excellence Framework, intended to address the question left hanging after tuition fees were introduced – in a ‘free market’, how can students assess whether they are getting ‘value for money’?
The government’s Green Paper envisages four levels of teaching quality, with the government setting the maximum fee that can be charged at each level. A high score in the TEF would give the green light for institutions to raise fees.
This makes it necessary, as Stefan Collini made clear recently in the London Review of Books, to have ways of assessing teaching quality that are precise enough to allow meaningful ordinal ranking of universities.
Collini and other academics pour scorn on attempts to quantify the process of teaching and learning. Anthropologist Stephen Gudeman asks, “Do these prices capture the qualities of excitement, enlightenment, and the opening of new worlds that students experience in class? Do they capture talking to students after class in the hall as we make way for a more efficient use of the classroom’s space?”
Three metrics are proposed: retention data, National Student Survey scores, and data on graduate employment. Universities will also have to submit evidence of how they assure teaching quality and monitor student outcomes. (Collini asserts that none of these measures actually provides direct evidence of the quality of teaching.)
For those observing this from the next-door tenement of secondary education, two thoughts occur.
Firstly, the obvious measure – degree results – is missing, presumably because of the difficulty of comparing results from different universities. Despite attempts to establish comparability, there is simply no agreed way to compare a degree from one university with a degree of the same class from another. Universities set their own success criteria in this regard, so the Green Paper is probably right to avoid poking that particular hornets’ nest.
Of more interest is the importance that the Green Paper attaches to student feedback. This is fair recognition of the fact that undergraduates are (a) adults, and (b) consumers by virtue of being (delayed) fee-payers. Schools occupy different terrain in this regard – pupils are not adult consumers (and even in independent schools it is not the pupils who pay the fees). Nevertheless, there are enough similarities to make it worth exploring whether student feedback might have greater utility in the assessment of teaching quality in schools.
Many teachers seem to fear direct student feedback, for no very clear reason. Are pupils really capable of evaluating the effectiveness of teaching? Yes. But aren’t they likely to be distracted by shallow personality issues, or by recent run-ins with particular teachers? Won’t they favour teachers who go easy on homework? The short answer, supported by research and by experience, is a resounding no.
Within a basket of evidence – benchmark tests and subsequent metrics, lesson observations and work scrutiny – feedback from the learners themselves offers a balance that teachers should welcome.
Dr Kevin Stannard is the director of innovation and learning at the Girls' Day School Trust. He tweets at @KevinStannard1