It's spring, and while a young man's fancy lightly turns to thoughts of love, teachers will be thinking about their schemes of work for the autumn.
If you're trying to embrace the new computing curriculum, or just revamping your programme of study, then you might be wondering where to start. One of the things I like about computing is that all of the topics (aside from the e-safety elements) form a nice coherent whole. The students enjoy seeing how all of the areas of the curriculum are linked, and it also makes it simpler for teachers to plan the course and reinforce students' knowledge because:
- you can seamlessly flow from one topic to another in a way you couldn't do with ICT – the previous curriculum was a bitty mess of "today we're making a spreadsheet but next lesson we're reviewing a web-page".
- you can frequently revisit previous topics – e.g. as lesson starters – in a way that's relevant and helps students to remember and practise them.
If you are new to computing, I have produced a curriculum map of computing topics to show you how they are linked. I designed this for the KS3 curriculum, and it's based on the National Curriculum, but it's also suitable for KS4 students.
So where should you start? In my opinion, an understanding of the internal representation of data – i.e. how different types of information are stored inside the computer – is the glue that binds together the whole of the computing curriculum, so that's where I would begin.
Specifications for KS4 qualifications often start with topics such as units of storage and storage media. Some courses even require students to know how individual bits are stored in solid state or optical media, for example, and most students have heard of bytes, even if they're not sure how much information a byte will store.
But what's in those bytes? And why are sound and video files so much larger than spreadsheet or database files? Can I make those files smaller? If I want to write a program to manipulate an image, how do I go about it?
As early as possible I impress upon students the idea that computers are just devices for processing numbers, and everything that you want a computer to store – from text to videos of dancing cats and Justin Bieber songs – needs to be represented as a series of numbers. But what do the numbers actually mean?
You can go into the relative merits of bitmap and vector graphics and the intricacies of palettes and bit-depths, or debate wave vs. MIDI files, but I start with the idea that bitmap images store the colour of each pixel – i.e. the amount of red, green and blue required to reproduce it – and sound files store the amplitude (i.e. volume) of samples/slices of the sound wave at each point in time.
Not only does this demystify the process of writing a program to, say, manipulate an image or sound, but some students are genuinely stunned that their 8-megapixel camera saves 24 million numbers into a file every time they press the button.
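The idea that an image is just a long list of numbers can be sketched in a few lines of Python. This is a minimal illustration, not any particular file format – the values and the 2×2 "image" are made up for the example:

```python
# A tiny 2x2 "bitmap": each pixel is just three numbers (red, green, blue),
# each in the range 0-255 (one byte per channel).
image = [
    [(255, 0, 0), (0, 255, 0)],      # a red pixel, a green pixel
    [(0, 0, 255), (255, 255, 255)],  # a blue pixel, a white pixel
]

# Count how many numbers are needed to store the image.
numbers_stored = sum(len(pixel) for row in image for pixel in row)
print(numbers_stored)  # 4 pixels x 3 channels = 12 numbers

# The same arithmetic for an 8-megapixel photo:
print(8_000_000 * 3)  # 24,000,000 numbers
```

Students who can see the image as a nested list of numbers are only a short step from writing code that brightens, inverts or crops it.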
We can do the same with text, using ASCII codes to represent the characters and binary to store those codes. Understanding how text is stored enables us to quickly see how encryption works, for example.
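A quick demonstration of text as numbers takes two lines of Python, using the built-in ord and chr functions:

```python
# Each character of a string is stored as a number (its ASCII/Unicode code).
message = "Hello"
codes = [ord(ch) for ch in message]
print(codes)  # [72, 101, 108, 108, 111]

# And the numbers can be turned back into text.
print("".join(chr(c) for c in codes))  # Hello
```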
If someone had explained these ideas to me, my next question would have been, "Yes, but how are these numbers actually stored in the computer?" – cue an explanation of binary, probably the most important topic in the whole of computing.
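The standard repeated-division method for turning a denary number into binary is also a nice early programming exercise. A sketch in Python (the function name is my own):

```python
def denary_to_binary(n):
    """Convert a non-negative integer to a binary string by repeated division by 2."""
    if n == 0:
        return "0"
    bits = ""
    while n > 0:
        bits = str(n % 2) + bits  # the remainder is the next bit, right to left
        n //= 2
    return bits

print(denary_to_binary(13))  # 1101
```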
As an aside, I find that some weaker students find the leap from the concept of binary to using 0s and 1s a bit too far, but I've discovered that using a binary abacus can help to bridge the gap.
Once you know how data are stored, you can think about how they might be transmitted. Knowing that everything is stored as a binary number allows you to make sense of serial and parallel transmission, and to understand ideas from bus width and word length, through the use of parity and bitwise operations for encryption, to things like subnet masks. It also allows you to think about bandwidth, and why it might be useful to compress the data in a file to make it smaller.
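Parity, mentioned above, is an easy one to demonstrate once students know binary. A sketch of even parity in Python (the function name is mine; assume a single byte and even parity):

```python
def even_parity_bit(byte_value):
    """Return the parity bit needed so the total count of 1s (data + parity) is even."""
    ones = bin(byte_value).count("1")
    return ones % 2

# 0b1101 contains three 1s, so an even-parity system must add a 1...
print(even_parity_bit(0b1101))  # 1

# ...whereas 0b1111 already has an even count, so the parity bit is 0.
print(even_parity_bit(0b1111))  # 0
```

If a received byte plus its parity bit has an odd number of 1s, the receiver knows a bit was flipped in transmission.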
Hopefully you can now see that, once you start thinking about the topics, the scheme of work practically writes itself. If you add some Boolean logic, for example, you can combine it with binary to give you bitwise logic, or link binary and logic circuits via the half-adder and binary addition. And once you've covered bitwise logic you can use it in programming to convert denary to binary, or combine it with ASCII codes to use EOR for encryption, and so on.
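The half-adder link between Boolean logic and binary addition can itself be shown in code: the sum bit is an XOR of the inputs and the carry bit is an AND. A minimal sketch in Python:

```python
def half_adder(a, b):
    """Add two single bits: the sum is a XOR b, the carry is a AND b."""
    return a ^ b, a & b

# Print the full truth table, which is exactly one-bit binary addition.
for a in (0, 1):
    for b in (0, 1):
        total, carry = half_adder(a, b)
        print(f"{a} + {b} = carry {carry}, sum {total}")
```

Students can then chain half-adders (plus an OR gate for the carries) into a full adder, tying logic circuits directly to the binary arithmetic they've already met.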
You can also use any of these ideas in your programming work – e.g. you can use ASCII codes to produce Caesar shift ciphers (as previously mentioned), or you can create binary flags to convert binary numbers or change state (as in my seven-segment display, described in my article on the use of arrays). An understanding of the representation of data is also key to understanding data types in programming and databases – e.g. the difference between a byte and a long integer, or a float and a long.
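The Caesar shift is a natural capstone, because it needs both the character-codes idea and some modular arithmetic. A minimal sketch in Python, assuming upper-case letters only (the function name is mine):

```python
def caesar_shift(text, key):
    """Shift each upper-case letter by key places, wrapping round the alphabet."""
    result = ""
    for ch in text:
        if "A" <= ch <= "Z":
            # Convert the letter to a number 0-25, shift it, wrap with mod 26,
            # then convert it back to a character.
            result += chr((ord(ch) - ord("A") + key) % 26 + ord("A"))
        else:
            result += ch  # leave spaces and punctuation alone
    return result

print(caesar_shift("HELLO", 3))   # KHOOR
print(caesar_shift("KHOOR", -3))  # HELLO - a negative key decrypts
```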
Lots of schools seem to have KS3 schemes of work that are still old-school ICT with a bit of Scratch (or other programming) thrown in, and maybe some HTML, without the foundations of computer science that I have just described. My advice is to be bold, and get stuck in with the theory because it helps students to understand and link together everything else that you teach, and gives the impression that computing is a coherent subject in the same way that, say, maths is.