Technology has moved the exam goalposts. So what are we actually trying to assess?
A team of researchers at the University of Colorado has developed a program that, it is claimed, marks essays as competently as a human. This reminded me of our work at the Qualifications and Curriculum Authority on the implications of information and communications technology (ICT) for examinations.
The issues are complex. Many rightly point out the need to recognise and reward new ways of working, such as collaborative writing and joint research projects, and the multimedia authoring and modelling skills which ICT supports. Others claim that the regulations and mechanisms which underpin the examination system inhibit the take-up of ICT in schools. If pupils are taught to use word processors to organise their thoughts, how will they respond to paper-based exams?
We have also heard about the opportunities ICT offers for improving the exam system. Suggestions include assessment tools which adapt to users' responses so that pupils work at an appropriate level, and interactive simulations which test understanding in a practical way. I have also seen impressive demonstrations of how ICT could, through the Internet, provide "anytime, anywhere" assessment, using encryption software to guarantee security.
There are also concerns about the threats that ICT poses. A central feature of any qualifications system is the need to identify individuals' work and to make reliable judgments on how that work was produced. The ease with which ICT allows pupils to retrieve and use information becomes a weakness if it leads to indiscriminate cutting and pasting and a lack of authenticity.
A fictional example makes the point. Sarah is investigating how rivers change from source to sea. The class has collected data on the local river. Sarah enters all this data into a spreadsheet to generate graphs and check for correlations. She chooses suitable axes, prints her graphs and wonders whether the relationships she has identified are true for other rivers. She searches on the Internet for references to major rivers and locates a number of relevant sites.
She prints out an academic paper on the river Nile. The language used is beyond her so she copies the text in electronic form and runs it through a precis program. This gives some useful information for her report, most of which contradicts her findings. She wonders whether the difference in sizes between the two rivers is an issue. She goes to a pupils' bulletin board and posts a message offering to trade her findings with other pupils' data on rivers in their localities. Of the six responses, two contain data and charts which she can cut and paste into her project, and two others give her raw data in need of processing.
She assembles the project, runs it through a grammar and spell-checker, and removes all uses of the passive voice. The grammar checker tells her that the work has a Flesch reading ease score of 75. She prints it out.
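The score Sarah's grammar checker reports is not mysterious: the published Flesch reading ease formula is 206.835 minus 1.015 times the average sentence length (words per sentence) minus 84.6 times the average syllables per word. A minimal sketch of how such a checker might compute it, assuming a naive vowel-group syllable counter (real tools use pronunciation dictionaries, so results will differ):

```python
import re

def count_syllables(word):
    # Crude heuristic: one syllable per run of consecutive vowels,
    # with a minimum of one per word.
    vowel_groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(vowel_groups))

def flesch_reading_ease(text):
    # Split on terminal punctuation for sentences; pull word tokens.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))
```

Short, monosyllabic sentences push the score towards 100 and beyond, while long sentences of polysyllabic words drag it down, which is why a score of 75 ("fairly easy") is plausible for a school project.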
Most people would agree that Sarah is a capable student with an understanding of what ICT can do. Most would also agree that she needed to know a lot about geography in order to make a critical assessment of the information. Some would have reservations about her understanding of the information.
All of this takes us back to the real issues: what are we trying to assess, and which assessment method is most suitable? Evidence from other countries suggests our answers to these questions must inform any future changes.
Niel McLean is principal manager for ICT at the Qualifications and Curriculum Authority