Scribbling in the margins is a vital tool in helping students to improve, but miscommunication can warp its impact. Mark Roberts explores the research to identify solutions to five common problems
Every teacher knows that feedback is important, but how many of us truly know what effective feedback looks like?
The feedback we give is certainly not universally effective. Research into feedback interventions has found that more than a third of them actually decrease performance. According to Hatziapostolou and Paraskakis (2010), to be effective, feedback must be “timely, constructive, motivational, personal, manageable and directly related to assessment criteria”.
So, tick-and-flick marking, with the odd “good effort” annotation, isn’t going to cut it.
Ruminating on this, I was keen to improve my written feedback in particular. A wealth of research suggests that our margin scribbles often hinder progress rather than accelerate it. So what are the biggest problems with written feedback, and what are the possible solutions?
Legibility

An obvious but important issue is students not being able to read their teacher’s handwriting. Weaver (2006) found that only 37 per cent of university business students felt the feedback they received was “written clearly” and “easy to read”. Other studies have also shown a student preference for electronic over written feedback.
If, like me, you have a scruffy scrawl, or you cross-mark with colleagues whose writing is unintelligible, consider the time students waste deciphering it during dedicated improvement and reflection time (DIRT). Would typed whole-class feedback be more useful?
Confusing comments

What seems, from a teacher’s perspective, like a precise and helpful comment may actually confuse students. One study highlighted the misunderstandings caused by a frequently used annotation in humanities subjects: “Too much description; not enough analysis.” Instead of encouraging improvement, the vagueness of this comment provoked further misunderstanding – nearly half of students interpreted it in a different way from how their tutors intended.
Don’t assume that students understand generalised statements, even if they “really should know by now”. Use common weaknesses in students’ work as a sign that they didn’t grasp concepts during instruction. In the case above, I’d model the difference between the two terms before asking students to make improvements.
Errors and misconceptions
Researchers from the Education Endowment Foundation and the University of Oxford recommend that “careless mistakes should be marked differently to errors resulting from misunderstanding”. Why waste time writing “use capital letters” when a confident writer has sloppily written “botswana”?
Errors, by contrast, can be based on deeper misunderstandings. Marking an error as incorrect – such as confusing tendons and ligaments – without a prompt was found to be counterproductive because students didn’t “have the knowledge to work out what they had done wrong”.
Consider a student’s prior knowledge and your intentions when picking out errors and mistakes. Will a maths teacher be more interested in the answer being right or the correct method being used? Be aware that students might make further errors when correcting work: research suggests that learners are strongly motivated to amend work by teachers’ comments, even when they are unclear what they have done wrong in the first place.
Purpose and audience
With written feedback, it’s essential to ask two simple questions: “Why am I writing comments?” (purpose) and “Who are these comments for?” (audience). The answers may not be as obvious as they seem.
Are your comments purely intended to narrow the gap of understanding, or are they written with unhappy parents, senior leaders or Ofsted in mind? Teachers often mark for themselves as well, as a way of helping to summarise the work.
Yet, research shows that comments like “evidence of some wider reading shown” – intended to indicate disappointment at a lack of depth – can have the opposite effect, convincing students that they’ve met this part of the assessment criteria.
If you must mark for a wider audience, make explicit the parts that are relevant to your students. But apart from annotations for standardisation, writing feedback in this way is a sign that it has lost its real purpose, which should be improving the student’s skills and understanding.
Considering success criteria
Written feedback that strays from the success criteria frustrates students and prods them into improvements that have little overall impact. This might be a history teacher who mainly red-pens an essay for SPaG errors, without offering advice on the line of argument – which is worth far more marks on the mark scheme.
Conversely, obsessing over success criteria for a particular task can detract from overall performance by encouraging students “to focus on the immediate goal and not the strategies to attain the goal”.
Think about the balance between helping students to improve on the current task and on future tasks. Thompson (1998) cautioned that task-related feedback doesn’t translate well to other tasks. Using written feedback more effectively requires a mindset shift, from focusing on improving the work (“wrong word used in opening sentence”) to improving the student (“thesis statement lacks clarity – use the model from the last lesson to adapt your opening sentences”).
Mark Roberts is an assistant headteacher in a secondary school in the South West of England
This article originally appeared in the 12 July 2019 issue under the headline “Written feedback could do better”