How effective is computer science CPD?

Stephen Trask
5th August 2019 at 09:00

In the world of computer science, concepts, knowledge and techniques change very quickly. Five years ago, few teachers would have mentioned, let alone taught, the concept of artificial intelligence. Last term, my key stage 3 students were debating the pros and cons of self-driving cars and the role of probability in machine learning.

In order to get to the point of being consistently good, passionate, inspiring computer science teachers, we need top-notch CPD. But what does this mean? What is the purpose of CPD? What are the barriers?

Helpfully, there are several researchers and analysts who have measured the impact of CPD on student outcomes.

The impact of teacher CPD

It may be helpful at this point to briefly summarise how we measure the impact of a particular intervention. Suppose we wanted to teach a computer hardware unit bilingually, in a mixture of English and Italian, to a certain group of students. To measure the impact, we could:

  • Take a class of 30 Year 8 students, of whom 80 per cent are native Italian speakers, but who are being taught an English curriculum in an English-speaking school.
  • Find another class of 30 Year 8 students, of whom 80 per cent are native Italian speakers in the same situation as the ones above.
  • Perform some sort of baseline test upon them in order to establish that they are very similar in terms of cognitive ability and linguistic skills. (We are assuming in this case that they are very similar.) 
  • Identify a student within the group who attains an average grade within the baseline test, whatever that average may be for the class. We shall call this student Bob. Bob is placed at 15th in the class of 30, because he is the average (15.5 in fact, but we are not going to dissect Bob, so let’s keep him at 15th).
  • Treat one group as the control group, who have no intervention.
  • Treat the other group as the group (containing Bob) who receive the bilingual intervention, and instigate this new method with them over x number of weeks.
  • At the end of the intervention period, perform another test on both the intervention class and the control class, in order to establish whether the intervention group has, in fact, improved.

Upon analysis, you might find that, yes, bilingual teaching within this context led to higher scores in the intervention group than in the control group. We can now calculate the effect size of this intervention: the difference between the two groups' average scores, expressed in standard deviations. In other words, by how much did Bob improve?
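
To make the calculation concrete, here is a minimal sketch in Python – with invented scores – of how the two classes' post-test results might be turned into an effect size, using Cohen's d (the difference in means divided by the pooled standard deviation):

```python
# A minimal sketch, with invented post-test scores, of computing an effect
# size as Cohen's d: difference in means over the pooled standard deviation.
from statistics import mean, stdev

def cohens_d(intervention: list[float], control: list[float]) -> float:
    n1, n2 = len(intervention), len(control)
    s1, s2 = stdev(intervention), stdev(control)
    # Pool the two classes' standard deviations.
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(intervention) - mean(control)) / pooled_sd

# Hypothetical end-of-unit test scores (out of 100) for the two classes.
bilingual_class = [62, 71, 58, 66, 74, 69, 63, 70, 65, 72]
control_class = [60, 68, 56, 64, 71, 66, 61, 67, 62, 69]

print(f"effect size: {cohens_d(bilingual_class, control_class):.2f}")
# effect size: 0.54 -- a medium effect on the scale discussed below
```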

If the effect size were 0.4, then we could move Bob from his intervention class into the control class and, instead of being placed 15th, he would now be placed 11th. This means that he is no longer totally average: he is performing at a higher level than most of the other students in that class. If the effect size were 0.9, Bob would be placed 6th in the control class, well ahead of most other students.
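
The arithmetic behind Bob's new rank can be checked directly. Assuming, as such comparisons usually do, that scores in both classes are roughly normally distributed with similar spread, an effect size of d places the average intervention student at the Φ(d) percentile of the control class. A minimal sketch:

```python
# Converting an effect size into Bob's rank in the control class, assuming
# roughly normal score distributions with equal spread in both classes.
from statistics import NormalDist
import math

CLASS_SIZE = 30

def rank_in_control(effect_size: float, class_size: int = CLASS_SIZE) -> int:
    # An effect size of d lifts the average intervention student to the
    # Phi(d) percentile of the control distribution (Phi = normal CDF),
    # so a fraction (1 - Phi(d)) of the control class still scores above him.
    percentile = NormalDist().cdf(effect_size)
    return math.ceil(class_size * (1 - percentile))

# Sanity check: an effect size of 0 leaves Bob exactly average (15th).
for d in (0.4, 0.9):
    print(f"effect size {d}: ~{rank_in_control(d)}th in the control class")
# effect size 0.4: ~11th in the control class
# effect size 0.9: ~6th in the control class
```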

Effect sizes are, therefore, useful gauges of the impact of any intervention, with the caveat that we need to be wary of confusing correlation (where two things vary together and patterns emerge) with causation (where we assume that a must have caused b).

Most researchers consider an effect size of 0.2 or below to be trivial: if one measures a particular intervention on a group and the effect size is no bigger than 0.2, then the intervention is generally considered not to have been worth the effort. An effect size of around 0.4 to 0.5 is considered medium (depending on which statistician you follow), and anything greater than 0.7 to 0.8 is a large effect size.

So, with this understanding of effect sizes, what types of CPD are we aiming for? Firstly, the assumption we need to work from is that improving the quality of teaching correlates with raised student achievement. In Visible Learning (2008), a synthesis of 800-plus meta-analyses (a meta-meta-analysis, if you will), John Hattie found a significant effect size of 0.62 for the positive impact of teacher CPD on student achievement.

School objectives

However, CPD varies hugely in quality (we have all been there) and in what it is seeking to achieve. From a whole-school point of view, CPD must tie in with the overall school objectives. From a computer science subject point of view, it should similarly tie in closely with the departmental development plan, which itself should dovetail with the whole-school development plan. CPD tends to be ineffective when little or no strategy underpins it, and when there is no way of demonstrating that it is actually being used in class.

In other words, if a two-day Raspberry Pi CPD session in Leeds looks interesting (and it probably is very interesting), then, to be effective, it needs to tie in with the departmental development plan. This might be something akin to developing students' understanding of programming GPIO pins or Sonic Pi. The departmental development plan ought then to tie in with the wider whole-school curriculum plan, which might be to increase creativity within each subject area.
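
To make that concrete, the following is a minimal sketch of the sort of first GPIO activity such a course might lead to in class, using the Raspberry Pi's gpiozero library; the pin number and timings are arbitrary choices for illustration:

```python
# A minimal sketch of a first GPIO lesson on the Raspberry Pi, using the
# gpiozero library. Pin 17 is an arbitrary choice for this illustration;
# any free GPIO pin wired to an LED (via a resistor) would do.
from gpiozero import LED
from signal import pause

led = LED(17)                     # LED connected to GPIO pin 17
led.blink(on_time=1, off_time=1)  # blink once per second, in the background

pause()  # keep the script alive so the blinking continues
```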

So assume that the Leeds CPD is a great success: plenty of resources have been gleaned, contacts made and so on, and the teacher has returned to school bursting with ideas. Another criterion for judging the success of the CPD is that the teacher has a mechanism for demonstrating that they have followed through on it within their lessons. Somebody overseeing CPD delivery within the school should enable this, perhaps via the HOD, perhaps via peer observation or student feedback; without this mechanism, it is impossible to judge the efficacy of the CPD. If the teacher attended a two-day Raspberry Pi course six weeks ago but, on reflection, decided not to use anything that they learned on the course with their students, one could reasonably assert that the CPD was not that useful.

Another thought to chew over when considering subject-specific CPD: Mary Kennedy, in her 2016 paper "How does professional development improve teaching?", examined the impact of CPD on teaching and on students. She found that CPD focused on content knowledge – such as Raspberry Pi workshops – tended to have less effect on student learning than most people anticipated. Kennedy recommended that CPD should avoid isolated topics and instead focus on the issues teachers face, providing solutions that are practical and usable.

The suggestion is that CPD that improves a teacher’s pedagogical capabilities – such as classroom management, delivery, theories of learning and so on – would have more impact on student outcomes than sending a teacher on a subject-specific "content knowledge" CPD session. 

Naturally, the ideal solution is to offer CPD sessions that tie in perfectly with the whole-school development plan and address overall pedagogy, combined with subject-specific content that feeds into the departmental development plan. All budgeted, timed and accounted for. All coming together in perfect synchronicity. Sounds easy, doesn't it?

Stephen Trask is an Apple Distinguished Educator – one of only 2,500 educationalists globally who have been assigned this status. He oversees technology and is the data-protection officer at a British international school in Rome, and has previously worked in the UK, Asia and the Middle East. He tweets at @Steve_T_online