Let usefulness be our yardstick for research
There is a whole education-research industry out there clocking up points for the next assessment exercise. With about 3,000 education lecturers each trying to notch up their four papers, this means at least 3,000 contributions to understanding per year, at a cost in salaries alone of about £30 million. Yet the astonishing thing is that what is written is almost never read by the people who really matter in providing education - the teachers and the policy-makers.
Academics are inclined to put this down to torpidity on the part of practitioners. But the neglect, I suspect, points to something more fundamental. In my view, much of the time the researchers are barking up the wrong paradigm.
Educational research has modelled itself on scientific research. The researchers develop concepts, attach numbers, statistically analyse, build theories and write papers. There is a sedate process of peer review and publication in prestigious journals. All this works wonderfully well when it comes to making sense of the physical world. But education is about people, not rocks or plants.
Education is a complex human activity not easily represented in numbers. It is person-made and can be changed virtually overnight by the decisions of policy-makers. It is essentially a matter of values, and people are often less concerned about what is happening than about what ought to be happening.
It is easy to put numbers on anything. You can wander around the National Gallery rating the paintings on a five-point scale. But will the numbers have any meaning? With numbers, there is no limit to the statistical analysis you can do, but however good the technique, it cannot create meaning where little or none existed. Disputes about the value of intelligence tests are sometimes of this kind, but the problem is more apparent when attitudes are being "measured". The Confederation of British Industry, for example, claims that 98 per cent of employers favour competence-based vocational qualifications, but cannot explain why so few have adopted them.
Scientific research works because it is checkable against a relatively enduring external reality. In the case of education, "external reality" can change very rapidly. The lengthy process of publication can mean that the findings emerge when they are only of historical interest. Current papers on the Technical and Vocational Education Initiative and the 1988 national curriculum are examples. By the time present work on vocational qualifications appears, they will have been reformed. Ways have to be found of capturing and communicating a fast-changing world.
At best, research can tell you "what is", not what "ought to be". Educational research sometimes pretends to be able to go beyond description to prescription. Many of the terms used, often borrowed from psychology or sociology, have both descriptive and evaluative meanings. When "maturity" is measured by questionnaire, research which is purely descriptive may give the impression of judging worth, since the familiar evaluative meaning of the word is present in the conclusion. The same applies to "achievement", "team-playing" and "authoritarian". Slippage often occurs because it is almost impossible to use a descriptive term about people and education without its having some value content. Evaluation itself is a bastard discipline requiring researchers to judge worth as well as collect evidence.
All this can leave education research findings in a weak position vis-à-vis prior value commitments. Our everyday experience is that the Earth is flat and still, and it is hard to accept that it is a small lump of rock hurtling through space. Yet the evidence forces us to accept that it is. In contrast, in education the evidence is nearly always subordinated to value positions. There is so much emotion attached to streaming or mixed-ability teaching, for example, that only very strong evidence is going to persuade protagonists to think differently, and that evidence is not always obtainable.
So can education be systematically studied? Emphatically yes. But we have to be clear about what such study can and cannot do. It can provide accurate descriptions, but it can never tell you of itself what ought to be done. It can contribute evidence toward decisions, but never make them.
For the descriptions it provides to be useful, they must be checkable. Where they attempt to be quantitative the numbers must be assigned authentically. The research must also be relevant, and the findings must be published promptly in forms which are accessible to teachers and policy-makers.
It is good to check on the quality of research in our universities, but in the case of education the assessment exercise is in danger of reinforcing an inappropriate paradigm. People who have committed themselves to a life of teaching and training teachers are now finding themselves bullied to produce papers in the old mode. The Higher Education Funding Council has said it is more concerned about quality than quantity, but that is likely to mean the prestige of the journals. The Education Panel is packed with the academic establishment.
The real test of research quality is the difference it has made to education. Consult the teachers and the policy-makers, neither of whom is represented on the assessment panel. If the criterion of usefulness were adopted, we should have some reasonable hope of creating a field of inquiry valued by those whom it is intended to serve.
Professor Alan Smithers is director of Manchester University's Centre for Employment and Education Studies.