The sad, slow death of intuition
In Yevgeny Zamyatin's chilling dystopian novel We, dreams are treated as symptoms of mental illness, citizens of the One State are known only by numbers, and every instrument of government is ruthlessly geared towards measurable productivity. To this end, the emotional and imaginative brain functions of D-503, the protagonist, are surgically destroyed by X-rays when it appears to his political masters that he may be developing a soul.
Is education drifting towards such murky waters? We are not there yet. But the modern obsession with evidence-based practice and measurable outcomes is blurring the purpose of education and undermining authentic and laudable motivations for entering the profession. What scope is there for such intangible factors as spontaneity, creativity or relationship-building in teaching today? Are we witnessing, if not facilitating, the death of intuition in teaching?
No one would dispute that there has been an increasing emphasis on evidence-based practice over the past five years. There are now multiple initiatives aimed at connecting the educational research community with "practitioners" (formerly known as teachers). For example, one well-respected academic institution advertises a course called Leading Research and Development Within and Across Schools - it is no doubt well-intentioned and perhaps even balanced in its content, but doesn't the title make schools sound like pharmaceutical manufacturers?
This "medicalisation" of education is an explicit aim in some parts of the research movement. The interface between medical research and medical practice has long been held up as an exemplar to the teaching profession. The academic research community works through randomised controlled trials towards an increasingly refined understanding of certain interventions, which, in turn, informs the decisions made by doctors on the ground. Surely, we are told, schools should do the same with evidence from educational research.
We are certainly heading in that direction. Schools have begun to appoint internal heads of research, following a US trend. Harvard University runs a programme named Research Schools, in which students of educational research methodology team up with real schools to put theory into practice. The recent ResearchED conference boasted a line-up of leading educational lights and a full (fee-paying) house. Educational research - and its proposal that we use data to draw conclusions about how people learn best - used to be a theoretical affair for universities. It now brings confident (and sometimes institutionally enforced) recommendations for how teachers should do their jobs.
Take the cult of John Hattie and his famous synthesis of meta-analyses covering more than 50,000 studies of educational interventions. His influence continues to grow, as does his ever-shifting list of rank-ordered approaches. Indeed, Hattie's work has spawned a host of books and training courses, which are eagerly adopted by senior managers and policymakers seeking "actionable findings" to inform their strategies.
Beware brain power
There is, of course, an entirely understandable human tendency to lend weight to those beliefs or practices that are supported by evidence. But we know that we are not always right to do so; the quality of evidence can vary wildly. Evidence is endlessly susceptible to bias and error, and is rarely presented in a way that is value-free. It is also often used to demonstrate the plausibility of a pre-existing hypothesis, which may itself have been shaped by political or ideological drivers.
We know, too, that it is a normal part of human psychology to emphasise the evidence that confirms our presuppositions, wherever they come from. And we are easily hoodwinked; an intriguing US study of postgraduate neuroscience students found that they were significantly more likely to agree with a scholarly article's conclusions if the piece were accompanied by a picture of a brain. Follow-up studies found that articles on any subject were rated more highly by readers if they contained a reference to neuroscience or a picture of a brain, whether or not that reference contributed to the thrust of the argument. A simple juxtaposition of science and brain imagery, even when mischievously and irrelevantly inserted, causes an article to be considered more plausible. (As I write, I notice that one of my well-thumbed Hattie books has 15 brains on the front cover.)
There is also the notorious difficulty of deriving courses of action from facts. As philosopher David Hume famously argued, you cannot derive an "ought" from an "is".
There is no logical connection between data and human actions based on that data. For example, just because Shanghai achieves high maths scores in the Programme for International Student Assessment tests (fact), it does not follow that we ought to import its maths teaching strategies wholesale without further reflection (value judgement). Such an approach would fail to take account of deeply embedded cultural differences and of the possible cultural costs of such a switch, quite apart from any concerns about the purity of Shanghai's data.
What should we infer about our own judgement from this? That as individuals we are hopelessly and irredeemably susceptible? Or that we should trust our instincts and be cautious of evidence that contradicts them?
Do we see what we want to see?