We are all now very used to the notion that the current social policy climate is "evidence based", and that includes education. At the same time there is a recurring complaint that we do not have enough high-quality research evidence to tell us what really works in education, and the role of ICT is no exception. This may seem surprising - after all, apart from that endangered beast the theoretical philosopher, it is very rare for researchers in education to regard their work as "pure". Most active education researchers in the UK have a direct interest in policy and practice, and would no doubt like their work to contribute to the understanding of one or both.
A common reaction to this apparent conundrum is to assume there is a vast volume of education research but that most of it is of poor quality. If only education researchers would learn from their colleagues in healthcare.
In healthcare research, the randomised controlled trial is the gold standard.
A population of subjects is allocated at random to two groups: one receives an intervention of some kind, the other does not, and ideally no-one knows which group they are in. After a suitable period, if more of those in the intervention group show the desired outcome, the treatment is deemed successful.
Easy and unambiguous it seems, so why don't we do the same? Surely we can carry out this kind of study on the effect of ICT, then we would know "the answer".
I do not wish to dismiss the difficulties in health research - there are considerable grey areas, as the debates over MMR, CJD and, most recently, HRT make painfully clear. Treatments and outcomes are less clear cut in education but, more importantly, the causal link between treatment and outcome is less easy to establish, even with the resources to carry out the huge and often long-term studies routinely seen in healthcare. Annual estimates suggest the budget for health research runs at 10 to 15 times that for education. This is due in part to the requirement for healthcare companies to trial their products to prove efficacy and safety before they can sell them. I wonder what would happen if educational resource providers had to prove efficacy before they could sell to schools? Of course the price of successful products would go up, and some companies might be threatened, but it would reduce the spend on resources that fail once they are in school.
One thing that has transferred successfully from healthcare to education is the systematic review. In research there are often thousands of reports looking at the same question. No-one has time to read them all, and even if they did, how could they reconcile the apparently conflicting results? The answer is to commission a systematic review, which is exactly what the EPPI-Centre (http://eppi.ioe.ac.uk) is currently doing in several key areas of education research. One of its recent reviews looks at the relationship between ICT and attainment. The full reviews themselves are weighty documents, but the centre also produces shorter digests written by and for different user groups. These extract the key findings and point out some of the implications for users. This is a welcome step in closing the communication loop between researchers, policymakers and practitioners, which is essential if we are to achieve anything like the goal of evidence-based policy and practice.
Angela McFarlane is professor of education and director of learning technology at the Graduate School of Education at the University of Bristol