Measure of change

Does the teacher who goes on a course for heads of department come back and work more effectively? More to the point, do his or her pupils become better learners?

These are questions being grappled with by Nigel Bennett and Bob Smith of the Open University's Centre for Educational Policy and Management, as they attempt to devise, through their IMPPEL project (Impact on Practice of Professional Development in Educational Leadership and Management), a way of measuring the classroom-level results of educational management training.

Teachers and heads have always wondered whether courses actually change anything. Now, though, as management training evolves into a career-long structure of continuing professional development (CPD), the question is being more pointedly asked.

As Nigel Bennett says: "The Teacher Training Agency is starting to say that if you get funding for professional development, then you have to show that you are assessing its impact on classroom practice."

The task looks difficult, to say the least. (Bennett talks of "an enormous set of cans of worms".) Many of these problems are discussed in the paper on IMPPEL which they presented to the British Educational Research Association conference this summer.

Should we look for a measurable classroom improvement in the short term? Or can we somehow include subtle longer-term effects? And even if classroom results are better after the head of department has embarked on training, how can you say that this is the only factor driving the improvement?

Bennett and Smith started by asking local authorities, higher education institutions and training and enterprise councils what they were doing already to provide and evaluate CPD in educational management. Although the overall response rate was rather disappointing (little more than a quarter in the case of local authorities), it has provided helpful factual data, and Bennett and Smith think they have detected some trends, summarised here.

* While some authorities are doing a lot of CPD, others appear to be doing none. Bob Smith says: "The range went all the way from authorities which were unable to complete the form up to those which were providing more CPD than the form had room for."

* Where there is CPD, it is likely to be unco-ordinated, with no one in overall charge.

* Any CPD is unlikely to be evaluated for its impact on professional practice (as opposed, say, to being evaluated for client satisfaction). The paper says: "Only approximately one half of the LEAs and higher education institutions which provide CPD in ... educational management or leadership even claim to attempt to assess its impact upon the practice of those for whom it is provided."

The next step for Bennett and Smith is to investigate particular cases in depth by interviewing the clients. Nigel Bennett says: "We'll take a number of educational managers of different levels of seniority and talk to them at great length about CPD. We'll work out from them to the providers and to their colleagues."

At the end, they hope, there will be a method which can be used to evaluate the classroom impact of any CPD activity. Teachers will, on the whole, wish them well in it, for as Nigel Bennett says: "There is a strong sense that teachers approach a course in the spirit of 'How can I use this on Monday morning?'" These same teachers, though, would probably be disappointed if there was no attempt to look at the deeper, life-changing effects.
