Imagine this situation: an English teacher notices that many of her Year 7 pupils are struggling with basic literacy skills. She knows that most of these pupils – many of whom are from disadvantaged homes – are going to fall even further behind without urgent support, so she decides to do some research into how best to help them.
After looking at what has worked for similar pupils in the past, she decides that a highly structured, intensive catch-up programme is the best route to take. The programme with the best evidence behind it is expensive, and the school’s budget is tight, but the school finds the money by discontinuing an after-school homework club.
The teacher knows the stakes are high, so she takes extra care to make sure the programme is well implemented and that all the pupils who need extra support have access to it. After one term, it seems that most of the pupils have made good progress. But for the school to continue spending money on the programme, the teacher needs evidence that the improvements have come as a direct result of the new approach.
This is why conducting your own classroom research is so important. While evidence from randomised controlled trials gives us the best estimate of a programme’s average impact on pupils, that average might not always match the impact the programme has in a particular school. By running a small-scale evaluation, teachers can find out if a specific programme is working for their pupils.
But how do you go about doing this?
There are a range of options open to teachers who want to improve the way they evaluate new interventions or strategies. The Education Endowment Foundation’s DIY Evaluation Guide provides practical advice on designing and carrying out evaluations.
It covers the following basic stages:
Identify the question that you want your research to answer
Without a well-framed evaluation question, you’re unlikely to get the answers you need. A simple question could be whether an intervention boosts attainment or not. But another question could ask whether a particular form of an intervention works better than another: for example, whether weekly mentoring is more effective than monthly mentoring.
Decide how to measure the effect your intervention is having
Are you going to use national assessments, for example, or standardised tests? Most importantly, you need to select a measure which is valid (meaning that it measures what it claims to measure) and reliable (meaning that it is consistent over time and context).
Establish a comparison group or ‘control’ in order to understand the impact of the approach you are testing
Children will almost always progress as time passes, but you want to find out whether a new way of teaching means that they progress faster than usual. This means delivering the new approach to one group and then comparing it to a group which is not receiving the new approach. We can never truly know exactly what would have happened without the new approach, but establishing a control group gives us the best possible estimate.
Establish your baseline through a pre-test and deliver the intervention
A pre-test helps you establish where your pupils are starting from and enables you to create a fair comparison group. Pupils in your intervention and comparison groups should be starting from the same point. Once you’ve conducted your pre-test, it’s time to deliver the intervention.
Write down what you want to do as well as what you actually did
For example, record how long you plan to deliver the intervention, as well as how long you actually delivered it. This means that if the intervention is successful then you’ll know exactly what it was you did to make it work.
Conduct a post-test to find out what progress your pupils have made
You should think carefully about when to do this, considering how long it is likely to take for the intervention to have an impact on children’s attainment. You could also conduct one post-test at the end of the intervention and another one sometime after that to see whether the effect lasts.
Understand the data
Once you’ve finished your intervention and testing, you should put all your data into an Excel spreadsheet with a row for each pupil and columns for their test scores. Then you can use an “effect size” calculator to find out what impact the intervention had.
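For readers curious about what an effect-size calculator actually does, here is a minimal sketch in Python of one common measure, Cohen’s d: the difference between the two groups’ average post-test scores, divided by their pooled standard deviation. The pupil scores below are invented purely for illustration.

```python
# A sketch of the calculation behind an "effect size" calculator (Cohen's d).
# The scores here are hypothetical; in practice you would use your own data.
from statistics import mean, stdev

def effect_size(intervention_scores, comparison_scores):
    """Cohen's d: difference in group means over the pooled standard deviation."""
    n1, n2 = len(intervention_scores), len(comparison_scores)
    s1, s2 = stdev(intervention_scores), stdev(comparison_scores)
    # Pooled standard deviation across both groups
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(intervention_scores) - mean(comparison_scores)) / pooled_sd

# Post-test scores for each pupil in the two groups (invented example data)
intervention = [68, 72, 75, 70, 74, 71]
comparison = [65, 69, 67, 66, 70, 68]
print(round(effect_size(intervention, comparison), 2))  # prints 1.85
```

As a rough rule of thumb, an effect size around 0.2 is usually described as small, around 0.5 as moderate and 0.8 or above as large, though small samples like a single class make any estimate uncertain.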
Doing this will give you new and important information to help make sure that what you’re doing in the classroom is having a positive effect on your pupils. You might find a particular approach had a positive impact, you might not; you might find you want to adapt the programme for next time round.
But no matter what the results, share them. Send them to your colleagues and discuss what you’ve found. Just as with big randomised controlled trials, there’s little point to conducting research if we don’t learn from it.
Milly Nevill is head of evaluation at the Education Endowment Foundation
The 3 August edition of Tes is a research special, packed full of advice about engaging with education research: how to find it, how to assess it and how to use it. Pick up a copy from your local newsagents or subscribe to read online.