If you frequent teachers' forums, such as those in the TES Community, you'll regularly see colleagues asking for advice on how to record and monitor "progress". But what does that mean? How can we measure "attainment" or "progress" and what would the measurement look like? Are there SI units for progress?
One view is that, when creating a course or scheme of work for a particular subject, you should first ask yourself what it means to be good at that subject. What does it mean to be good at computing, for example? This will inform your curriculum and tell you what you need to teach, but it will also give you an idea of your success criteria – if someone needs to do A, B and C before they're good, then surely a student that can do A, B and C will be good. Or will they? Is it that simple?
How do you know when you're good at something? Is it when you can do it once? When you can do it consistently? Or when you feel confident in doing it? If you learn A one year, and then learn B the next, have you made progress? Even if B is actually much easier than A?
One of the problems with this approach for our subject is that there's disagreement about what computing is. We've got different ideas about what it means to be good at computing – I've said before that I'll feel I've done my job as a computing teacher if a student leaves my classroom, looks at something and says "I wonder how that works?" However, I've never seen an exam or baseline test that measures that particular skill. In fact, a lot of the "baseline" tests that I see measure things that I don't consider to be computing at all.
We all know that Ofsted wants to see "progress", but what is it? Is it getting better at something, or just getting further through it?
With the old national curriculum, it was easy: you matched the student's work against the descriptions in the subject-specific document and gave it a number. Or was it that easy? I never really heard a complete and satisfactory description of how to give a student an overall level that didn't include the phrase "professional judgement" or "best fit". Measuring the length of my desk doesn't require "professional judgement" – it requires a tape measure.
You could only really give a meaningful level in a single area of the curriculum – if a student programmed efficiently then they were at level 6, if they didn't, they weren't. Generating an overall level, which is what some schools and parents required, was trickier. What if something hadn't been taught at all? What if a student was a solid level 6 for "Finding things out", "Exchanging and sharing information" and "Reviewing, modifying and evaluating work as it progresses", but had never done any "Developing ideas and making things happen"? I was once told by a senior HMI inspector that under those circumstances the student would be level 6 overall – but if the same student had done a small amount of "Developing ideas and making things happen" and were maybe working at level 3 in that area, then their overall level would be reduced. Knowing more reduces their level? Surely that can't be right?
At least the old national curriculum levels indicated a proper hierarchy of skills – students working at level 6, for example, were working at a higher level than students working at level 4. Or, put more simply, the level 6 things were "harder". A level 4 student could "plan and test a sequence of instructions to control events in a predetermined manner", whereas a level 6 student could also "show efficiency in framing these instructions, using sub-routines where appropriate."
The "post-level" world seems to be dominated by teachers (or, more probably, school leaders) that still want to give everything a level, and schools and other organisations are creating their own systems of assessment, such as the CAS Computing Progression Pathways.
What I notice about many of the new "levels" is that they're not hierarchical. CAS give theirs colours, rather than numbers, perhaps to indicate this, but they still use arrows to indicate order and "progress", even though some of the later skills seem to be more straightforward than the earlier ones. For example, "Understands that iteration is the repetition of a process such as a loop" is two levels higher than "Designs solutions (algorithms) that use repetition and two-way selection i.e. if, then and else", which seems a bit strange when Bloom tells us that understanding comes before application. Also, if there are multiple strands in your subject, how do you ensure that the same levels in different areas are equivalent? Is understanding recursion really at the same level as "Knows the names of hardware"?
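To make the oddity concrete, here's a minimal sketch (the function and its names are my own invention, not anything from the CAS document). A program that "uses repetition and two-way selection" already depends on iteration – so it's hard to see how *understanding* iteration can sit two levels above *designing with* it:

```python
# A task at the supposedly "lower" level: repetition plus two-way selection.
# Classify each mark as a pass or a fail using a loop and if/else.
def classify(marks, pass_mark=50):
    results = []
    for mark in marks:            # repetition – i.e. iteration
        if mark >= pass_mark:     # two-way selection: if...
            results.append("pass")
        else:                     # ...else
            results.append("fail")
    return results

print(classify([35, 72, 50]))     # ['fail', 'pass', 'pass']
```

A student who can write this has plainly *used* iteration; whether they could articulate that "iteration is the repetition of a process" is a separate question – which is rather the point.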
Some schools have started to use numbers that relate to the new GCSE grade descriptors. In my experience, SLT members tend to be teachers of English or humanities. If you look at the GCSE grade descriptors for English, for example, you can see how that might make sense – they describe a finely-graded set of skills that a student might possess, and you might be able to see or predict whether your student will develop those skills over the coming years. English, though, has a limited range of skills – reading, writing, spelling, etc. – that you can apply to varying degrees.
Compare that with the grade descriptors for computer science – a grade 2 candidate will have "limited knowledge" and a grade 8 student will have "comprehensive knowledge". They're basically saying that the more you know, the higher your grade will be. I recently counted 120 skills that needed to be taught in a GCSE computer science specification. How many would constitute "limited knowledge" and how many "comprehensive knowledge"?
When a student starts a course they will, I would have thought, have "limited knowledge". If you teach 10 of the 120 skills and a student remembers and understands them all, what does that mean? Can you extrapolate and say that they'll know all 120 by the time of the exam and have comprehensive knowledge? But didn't you start with the easier topics? Can you be sure that they'll understand all of the rest? How about a student who understands half of the first ten – can you assume that they'll understand half of the remaining 110? Or that they'll understand fewer because they'll get more difficult?
For this reason, I've never understood why teachers (particularly my maths colleagues) say things like "This is an A* or level 8 topic…" Yes, that may well be a topic that students find trickier than most of the others, but how does that equate to a grade? The only thing that we can be sure about is that the more questions a student answers, the higher their grade will be – if they answer half of the questions, they'll get half marks, regardless of whether it's the easier half or the harder half. If they answer only the A* questions, then they'll most likely get a U.
Another issue to consider is the nature of the subject. With some subjects – English and maths, for example – there is a high degree of continuity between Key Stages 3 and 4. With some, e.g. science, the underlying principles are the same, but the content is different, so a student will justifiably have "limited knowledge" of GCSE content for the whole of KS3. Some subjects, e.g. business studies, don't exist at KS3, and some, e.g. PE, are completely different at GCSE level: no-one does a written PE exam in KS3.
If none of the previous or current methods is really ideal, how are we to measure progress? Here's one last conundrum. Is what we see actually the thing that we're trying to measure?
This is a particularly interesting question in computing. One of the things we're trying to teach, for example, is computational thinking. What does that look like? What the students might produce to evidence their computational thinking is a computer program – but the output isn't quite what we're trying to measure. One of the other things I've consistently observed is that confident and able programmers tend to make their applications user-friendly, rather than technically complex: again, that's not something we always see in schemes of assessment or lists of skills to be acquired.
I had an interesting discussion with my daughter's maths teacher at a Year 5 parents' evening. My daughter had recently moved from a school where they'd learned formal methods for multiplication and division nearly two years earlier than students at the new school. "Yes," said the new teacher, "but we teach understanding first…" Really? Can you teach understanding?
Bloom's Taxonomy tells us that remembering comes before understanding. Anyone who's had (or seen) a baby will know that we learn by observing and copying, and come to understand later. If at all. How many of us can happily use the formula πr² to calculate the area of a circle without understanding why it works? There isn't even agreement about what it means to understand something, let alone how to assess understanding.
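The circle-area point translates directly into code – and illustrates it nicely, because a student can write and use this function correctly with no idea why the formula holds (the function name is mine, chosen for illustration):

```python
import math

# Applying pi * r^2 correctly requires remembering the formula,
# not understanding its derivation.
def circle_area(radius):
    return math.pi * radius ** 2

print(round(circle_area(3), 2))   # 28.27
```

By Bloom's ordering, that's remembering and applying; understanding *why* the area is πr² is a different, later achievement.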
The new national curriculum leaves it up to schools to decide how to assess progress. When making that decision, here are questions that I would ask about the system devised:
- Is it suitable for all of the subjects taught in your school, both content-based and skills-based?
- Is it really possible to give an overall summary of the whole subject?
- If the subject is broken into "strands", what should they be? I break my computing course down into sections such as representation of data, maths for computing, algorithms and programming, networking and information systems. These do overlap, though – e.g. where do you put routing algorithms and compression techniques?
- Does giving everything a number make sense? How do you equate skills from different areas of the curriculum, for example? Is understanding recursion more or less difficult than adding binary numbers? Does a higher number indicate something that is more difficult, or just that it's further through the course?
- Are you measuring what you actually want the students to learn?
- Will students and parents easily understand their results or scores?
- Should students be involved in the process? There is a controversial idea that students are able to assess themselves.
It seems implicit that the DfE's intention was to do away with the use of numbers to describe a student's attainment. The system that my colleagues and I devised is both simple and easy to understand. It resembles the APP method, except in one crucial respect – we don't convert the results into a number at the end.
For each subject we have a bank of skills that the student might demonstrate. For computing these were based on the ones in the CAS Computing Progression Pathways document (with the apparent duplication removed and some extra bits added). For each "skill", we record whether the student can sometimes do something, or whether they can confidently do it. We can then report that a student can do X this year when last year they couldn't, or that they are now doing Y confidently when last year they only did it sometimes. There's no aggregation – it's easy to record and easy to read. That system might not suit you and your school, but it shows that recording doesn't need to be onerous and you don't need to label every student's attainment with a single number.
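For what it's worth, a record like that can be as simple as a per-student map from skill to "sometimes" or "confidently". This is a hypothetical sketch of the idea, not our actual software, and the skill names are invented:

```python
# Hypothetical no-aggregation skills record: each year maps
# skill -> "sometimes" or "confidently" (absent = not yet demonstrated).
last_year = {"writes a loop": "sometimes"}
this_year = {"writes a loop": "confidently", "uses selection": "sometimes"}

def progress(before, after):
    """Report per-skill changes without reducing anything to a number."""
    report = []
    for skill, level in after.items():
        previous = before.get(skill, "not yet")
        if previous != level:
            report.append(f"{skill}: {previous} -> {level}")
    return report

for line in progress(last_year, this_year):
    print(line)
# writes a loop: sometimes -> confidently
# uses selection: not yet -> sometimes
```

The report is readable by a parent as-is – "can now do X confidently" – and at no point does anything get summed, averaged or converted into a level.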
Andrew Virnuls teaches computing with the Warwickshire Flexible Learning Team.