In the 21 September issue of Tes, Harry Fletcher-Wood and Sam Sims question the evidence behind the government's CPD guidelines. Here, the authors of the study on which those guidelines were partly based respond to the article.
Twenty years ago, reviews of research and meta-analyses would have gone unnoticed by most teachers. In 2018, teachers are not only taking notice, they are also questioning both their methodology and their applicability. This is progress.
One such challenge is mounted by Sam Sims and Harry Fletcher-Wood in the 21 September issue of Tes (article free online to subscribers). They single out the Developing Great Teaching (DGT) review in their critique and, as the authors of that review, we both welcome the scrutiny – that’s how science progresses – and we challenge a number of the points they make.
First, we don’t recognise the "consensus" view about CPD that they assert, and we certainly don’t accept that DGT is representative of it. Their consensus is an artificial construct, assembled by selectively including or excluding features from a number of sources and claiming they are all the same; an approach, by the way, which the systematic review process is explicitly designed to prevent.
Different definitions of CPD
This is well illustrated by their representation of the Standard for Teachers' Professional Development, as compared to the actual Standard:
Sims and Fletcher-Wood's version of the Standard:
“The … Standard for Teachers' Professional Development. These reflect something of a research consensus that CPD is more effective if it:
- Is sustained;
- Is collaborative;
- Has teacher buy-in;
- Draws on external expertise;
- Is practice-based.”
The actual Standard for Teachers' Professional Development
Effective teacher professional development is a partnership between:
- Headteachers and other members of the leadership team;
- Teachers; and
- Providers of professional development expertise, training or consultancy.
In order for this partnership to be successful, professional development should:
- Have a focus on improving and evaluating pupil outcomes;
- Be underpinned by robust evidence and expertise;
- Include collaboration and expert challenge;
- Be sustained over time;
- Be prioritised by school leadership.
Working with available evidence
Second, reviews are intended to provide a high-level map or overview of a field of research. They are a broad guide and respond to broad questions such as “what are the common features of CPD most likely to positively impact on learner outcomes?”. They are not prescriptions or formulae. Indeed, DGT found that no individual characteristic, or combination of characteristics, was common across the most reliable reviews, studies and claims. What was common was the careful alignment of CPD activities and experiences with participants’ goals for their pupils. In our estimation, Timperley would receive a high Toolkit rating as a single systematic review, even though the studies it includes vary in quality.
Third, reviews bring together a diversity of evidence selected against rigorous quality thresholds and relevance standards. This was how DGT, a systematic review of reviews, was conducted, though we accept that the public reports of the review, commissioned for a practitioner and policy audience, provided little detail of the methodology (an omission corrected in our recent blogs).
Sam and Harry’s critique boils down to one point: the review includes research studies that were not randomised controlled trials (RCTs). The place of RCTs in education research is hotly disputed (see, for instance, Dylan Wiliam’s critique). Even accepting the value of RCTs, they remain rare in education generally and rarer still in studies of teacher professional development, which, like education leadership, is a complex, multivariate and expensive context for RCT study design.
More research needed
We, like the authors, would welcome more trial studies but, unlike them, we believe that we have an obligation to work with the best evidence currently available on the simple grounds that knowing something is better than knowing nothing. We acknowledge that the quality of the underpinning research is an issue. We identified 980 possible reviews and literature overviews: 115 made it through initial screening, with 46 then reviewed in depth. Only nine of these contained relevant and sufficiently evidence-based data linking CPDL and learner outcomes, so we mapped the claims from each of these against a pre-specified set of criteria to identify those which were consistent and rigorous.
Finally, our colleagues assert that they have a better approach, though we are mystified as to how it can be better when it clearly fails the very test to which they have subjected our review. To support their claim, they cite two studies relating to coaching, which they offer as examples of strong correlational research. These, they say, are conceptually underpinned by research demonstrating causal agency.
However, the basis for choosing the coaching studies is not revealed, and the milieu for the "causal" studies is psychological research, which the authors apparently believe is more plausible than, say, research conducted in education. We take a different view of psychological studies, as they are commonly conducted in near-laboratory conditions with very small samples (often of university students). We prioritise field trials in an education (preferably school) context, so for us the weak ecological validity of psychology studies is a serious issue.
We very much welcome this more sophisticated and questioning approach to education research and its value to the sector. It’s a further sign of progress that part of that debate is being conducted in the pages of the Tes. However, these are complex issues that we can only touch on in this space. Harry and Sam have addressed this problem by supporting the article with a more detailed paper. We have done likewise with a series of blogs, all of which are curated on the CEM at Durham University website and written by Rob Coe and Steve Higgins, of Durham University, and Philippa Cordingley and Paul Crisp from CUREE.
Professors Steve Higgins and Rob Coe of Durham University, Philippa Cordingley of CUREE and Professor Toby Greany of the UCL Institute of Education