As a contributor to The Fourth Education Revolution, Sir Anthony Seldon’s new book on artificial intelligence (AI) in education, I’ve been thinking a lot recently about how AI could help in teaching and learning. Perhaps a few years from now we'll see widespread use of "digital tutors" that help reduce teacher workload by coaching and mentoring learners, building on the likes of Siri and Alexa. But there’s also a darker side that we need to be wary of. Picture the scene not long from now, in a school near you…
It emerges that a pupil has been cheating by submitting homework assignments generated by an artificial intelligence. Another pupil has pranked the school’s IT systems by feeding them false data, resulting in the heating being turned up to full blast on the hottest day of the year. A third has smuggled a near-invisible digital-assistant earbud into their GCSE exams and is found to have been communicating with it by tapping morse code with their foot.
Meanwhile, cyberbullying at the school is on the rise, with realistic faked images, audio and video increasingly being shared of staff and students "saying" or "doing" things that will get them into trouble or bring the school into disrepute. Added to that, digital truancy has become a massive problem with the advent of autonomous agents that can use mum or dad’s voice to ring the school and report their child sick…
Technologists tend to assume their ideas and inventions will be used for the public good, to make the world a better place. That might mean creating a shop that stocks every product made (Amazon), a jukebox with every song ever recorded (Spotify), or a place for everyone in the world to share their thoughts and ideas (Facebook). However well-intentioned these efforts are to begin with, they can sometimes go awry; and sometimes people come along who simply have nefarious intentions – just consider the unfolding scandals around the microtargeting of adverts during political campaigns.
It also used to be the case that new technologies took some time to come to market. On 9 December 1968, computing pioneer Douglas Engelbart carried out what has come to be known as the "mother of all demos", showing conference delegates a computer with a mouse, windows, hypertext and much more. However, it took over a decade for anything like this to become widely available with the launch of the Apple Macintosh in 1984. Nowadays the pace of change is more rapid.
Just last week at Google’s I/O conference for developers, there was a moment that I think we’ll come to look back on as another mother of all demos: an AI-based project called Google Duplex. Duplex is straight out of science fiction – a digital assistant that can help with everyday tasks, such as booking a haircut or a table at a restaurant. And when I say help, I mean literally place a phone call to the restaurant and speak to the staff in conversational English. Google-search "Turing test" and you’ll see why this is significant.
We’ve recently seen AIs that can produce realistic simulations of people’s voices, putting words into other people’s mouths. We’ve seen AIs that can edit video footage to replace one person’s face with another’s. And Microsoft has built an AI that creates a picture based on your description, eg, “a blackbird sat on a branch”. We’ve also seen AIs trained on the complete works of William Shakespeare that can produce pseudo-Shakespearean text. At its most basic, all this means that we can no longer rely on the evidence of our eyes and ears.
But using AI to create things is still novel – most of the AI that is in use right now exists to recognise patterns or objects. For instance, a driverless car needs to be able to recognise road cones, speed-limit signs, cyclists, pedestrians, and so on. Already a whole branch of AI research has sprung up looking at how someone might trick an AI into making the wrong classification and hence the wrong or inappropriate decision. Students from MIT recently reported that they had managed to create 3D-printed objects designed to fool an AI into thinking they were something else entirely. They’ve made a turtle that their AI thinks is a rifle, a tabby cat that the AI thinks is guacamole, and a baseball that it thinks is an espresso.
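To give a feel for how such an attack works, here is a minimal sketch – not the MIT students' actual method, and with every name and number invented for illustration. A hand-built linear classifier separates two classes, and nudging each input feature slightly in the direction that lowers the classifier's score flips its decision; this is the intuition behind the "fast gradient sign" family of attacks on much larger models.

```python
# Toy linear classifier: positive score -> "turtle", negative -> "rifle".
# The weights, labels and inputs are all made up for illustration.
w = [1.0, -2.0, 0.5]   # classifier weights
b = 0.1                # bias term

def score(x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def classify(x):
    return "turtle" if score(x) > 0 else "rifle"

def sign(v):
    return 1.0 if v > 0 else -1.0 if v < 0 else 0.0

x = [2.0, 0.5, 1.0]    # an input the model classifies correctly
print(classify(x))     # -> turtle

# Adversarial step: move each feature a little in the direction that
# lowers the score. For a linear model the gradient of the score with
# respect to x is simply w, so we step against sign(w).
epsilon = 1.5          # exaggerated step size for this tiny toy model
x_adv = [xi - epsilon * sign(wi) for xi, wi in zip(x, w)]
print(classify(x_adv)) # -> rifle
```

In real attacks the perturbation is tiny – small enough that a human still sees a turtle – but the principle is the same: the attacker exploits the gradient of the model's decision, not a flaw a person could spot.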
'Indistinguishable from magic'
The author Arthur C Clarke once wrote that "any sufficiently advanced technology is indistinguishable from magic", and it’s easy to see how some of these applications of AI might come across as magical, or at least unfathomable. We need to avoid this black-box mentality if we are to reap the benefits of AI; instead, we need to understand how it works and, more importantly, what its limitations are. That would help us to see past both the black box and, yes, the Pandora’s box, to AI’s potential for reducing teacher workload and improving learning outcomes.
Going back to those examples of AI behaving badly in education, I think it’s crucial that we take an informed approach to ethics around use of personal data. This is why we at Jisc have co-created a code of conduct for learning analytics in partnership with institutions and edtech companies. It’s also why we’ve built security into the heart of the Janet Network, which connects more than 18 million people in research and education to the internet. If we keep our eyes open, AI doesn’t have to be either a black box or a Pandora’s Box, and we can make the most of it to enhance teaching and learning.
Imagine asking a classroom version of Alexa to help with a burning question you have about space travel. Students at Georgia Tech in the US got a taste of this recently when it emerged that one of their teaching assistants, Jill Watson, was actually an AI programmed to support their learning. After all, there was a time when your smartphone didn’t even exist; can you imagine life without it now? The same will happen with AI – there’s no doubt about it. It’s all rather exciting indeed.
Martin Hamilton is Jisc's resident futurist. He tweets @martin_hamilton