A wealth of national statistics (page 25) reveals each college's record on achieving funding targets, student numbers, drop-out rates, qualifications, contribution to national education and training targets, and value for money. To these six will soon be added student information on gender, age, mode of study and qualification level, plus a year-on-year record of improvement or slippage.
Coupled with the mass of exam data and inspection reports on every school and college, surely this ought to bring us closer to a definitive picture of standards and quality? Or perhaps not. The problem is that the more information we collect, the less clear the snapshot it provides. This has been the constant battle-cry of an army of researchers and statisticians: you cannot accurately quantify colleges and circumstances which are so diverse. And even when precise information is gathered, presenting it selectively in raw league tables is simplistic.
That is not to say the aim of the exercise is not worthwhile. The tax-paying public has a right to evidence showing its cash is spent wisely. Colleges need benchmarks for management information and the means to track students' progress. Politicians and administrators, too, need to know whether or not the system is achieving and, if not, where to intervene. The publication of the performance indicators persuaded dilatory colleges to root out weaknesses in their enrolment and monitoring systems. But it has been a painfully slow process.
The figures out this week are for 1994-95, and they still lack data from more than one in 10 colleges. So for now they must carry various health warnings. Nevertheless, there is enough information to begin to make cautious comparisons between colleges. And in time these performance indicators should improve.
A comparison of the performance indicators with the relevant exam league tables shows that the colleges which achieved their qualification aims are not necessarily the ones that came top of the league. Indeed, it might be argued that the latest indicators confirm some of the doubts about exam league tables. The new performance indicators may even bring further education a step closer to performance measured in a clearer "value added" context and to realistic, locally set goals.
Their publication does, however, raise the question of who they are meant to serve. The FEFC sees them as a management tool; ministers speak of accountability. There is a danger that performance indicators which try to be all things to all people will end up doing nothing properly.
To reduce the data mountain, colleges may even want to argue that these broader, if less glamorous, measures of strengths and weaknesses should supersede the exam league tables. But neither the Government nor the Opposition seems in any mood at present to reduce consumers' access to such information about schools and colleges, however unfair or imperfect the comparisons it presents.