Did you fall for it? Were you one of the millions of people who took a personality test via their friendly Facebook news feed?
If not, then there are countless other ways – on countless other platforms – that you could have innocently handed over copious amounts of information about yourself to third parties via technology. The Facebook and now-closed Cambridge Analytica example above was just one very public illustration of a much broader trend of data harvesting by technology companies.
Clearly, we don’t understand enough about emerging technologies, such as artificial intelligence (AI), and we don’t understand enough about why our data is so important and financially lucrative for companies such as Facebook.
We are becoming more aware now that if something is free, then we are the product. But teachers should be aware of much more: about the potential use of AI, the importance of human intelligence and about how their role as educators will be affected depending on how these two things coexist.
Teachers must be involved in the way that AI could be used in teaching, training and learning. Because if they aren’t, they’ll fail to recognise where their own value as educators lies – and that could end their role altogether.
Technology and machines will revolutionise how teachers work and how children learn. But as Martin George wrote in Tes last month (“Will artificial intelligence ever transform our schools?”, 18 May), we are only at the beginning of this journey in education: we don’t yet know the plot, let alone the ending. And it is tempting for teachers to be passive in the process of drawing up the narrative: to opt out, to think it is never going to happen, to ignore it because we don’t understand it or don’t want to understand it.
But AI is already in schools and it will become more prevalent whether we like it or not. I worry about the naysayers who doubt this reality, because they may prevent teachers and learners from gaining an active voice in the development and use of AI – a voice that would help to assure its ethical deployment, and the future of the teaching workforce.
That last comment is not scaremongering. If any teacher reading this article has concerns about AI taking their job, then the safest way to ensure that this does not happen is to get involved in the debate and decision-making about how AI should and should not be used for educational purposes. Be curious about what AI is and what AI is not, what it can do and what it cannot, and how that impacts what and how we teach.
Essential to doing that is to explore what we mean by intelligence, which aspects of intelligence are automatable with AI, and which aspects of intelligence are not and may never be. Because the good news is that human intelligence is about a great deal more than the automatable intelligence that is currently reproduced by smart technology.
Focus on academic intelligence
Human intelligence is a complex, interwoven mesh of elements that interact and develop over our lifetimes.
Academic intelligence is the focus of much of our education today, partly because it is what we often think of when we refer to someone as "intelligent". It includes knowing facts but goes further: using that factual knowledge to understand the complexity of the world and to act in practical ways as a result.
Academic intelligence is used in all sorts of contexts beyond the formal classroom, from following a recipe to planning an event or a holiday.
Meanwhile, social intelligence reflects how social interaction forms the basis for our individual thoughts. It also includes the communal intelligence we use in collaboration with others. It enables us to work together and to be more effective when we do.
The other five elements of intelligence are less straightforward to pin down because they are about what goes on inside ourselves. These are different types of meta-intelligence ("meta" meaning "referring to itself"). We all have them to varying degrees, but they must be nurtured just as much as academic and social intelligence.
First, we need to know about knowing, because we can only make meaningful judgements if we understand what knowledge is. What does it mean to know something? And what exactly constitutes good evidence of that knowledge?
Next, we can use that understanding of knowledge to interpret our own ideas so we can be sure that our interpretations are grounded in good evidence about the world.
Then emotions come into play because intelligence cannot be human without them. We need to recognise our emotions and those of others so that we can regulate our feelings, behaviour and motivations in relation to others and for the context we find ourselves in.
Meta-intelligence also has a physical dimension, because we use our bodies to interact with the world. This aspect of human intelligence enables us to recognise when our instincts need to be followed and when they can be ignored.
The final element of interwoven human intelligence is an expectation of success grounded in meaningful, accurate judgements: judgements about our own knowledge and understanding, our emotions, motivations and personal context, combined with an understanding of the knowledge, emotions, motivations and contexts of the other people in our lives. We need to judge our own ability to succeed in a specific situation and to accomplish tasks alone and with others. This is the most important element of human intelligence, and it is highly connected to the others.
So, which of these intelligences will be impacted by technology?
We often imagine that academic intelligence is the same as human intelligence, and this is why we risk handing too much of our humanity to AI – because AI is quite brilliant at this sort of knowledge.
As for social intelligence, there is general agreement that social interaction is not one of technology’s strengths: humanity has left the machines far behind.
And the meta-intelligences? It will come as little surprise that AI is terrible at them.
This is all good news for teachers because, if we get things right, their human skills and expertise as educators will be increasingly in demand as the key ingredient in meeting the nation’s lifelong learning needs.
However, to get the implementation of AI “right” we have to decide what we wish to prize and value in our education systems, and this is not as straightforward as it should be.
A key problem arises because the very same psychology that is at the heart of our knowledge-based curriculum is also at the heart of AI systems such as Alexa, Siri, Google Translate and IBM's question-answering system, Watson.
These systems have been supercharged by a perfect storm: large amounts of readily available data, cheap memory and processing power that can be accessed from almost anywhere, and AI algorithms that can learn very quickly without ever forgetting.
The result of this perfect storm is that we can now build AI systems that can scoop up the knowledge-based curriculum in no time at all. These systems can recall precise aspects of this knowledge curriculum on demand and communicate it to others as required in a number of different ways, including by voice, image and text.
Many educators place considerable faith in the design of the knowledge-based curriculum. I can certainly understand this. It is a soundly designed approach based on well-tested science.
However, the rigour of the knowledge-based curriculum’s design and the clarity of its structure are also the reasons why we can build AI that can learn it so successfully.
It is not that the knowledge-based curriculum is wrong per se; the problem is that it is wrong for the 21st century. Now that we can build AI systems that learn well-defined knowledge so effectively, it is probably not very wise to continue developing the human intelligence of our students towards that same goal.
The question that we must now address, then, as a matter of urgency, is: should we increase the sophistication of what we teach our children? Should we insist on a curriculum that extends beyond knowledge that is easy to test, so that we better develop the elements of their human intelligence that we cannot automate, including meta-intelligence: their knowledge and understanding of knowing? Should we aspire to an intelligence-based curriculum through which we develop a more sophisticated relationship to knowledge about ourselves as well as the world around us?
Educators who point out that one needs a certain amount of knowledge and understanding about something in order to develop a more sophisticated relationship to that knowledge (to probe and to interrogate it) are correct. However, not only can computers learn this academic knowledge, but they can also teach it in an individualised and efficient way.
I wonder, therefore, if we stick rigidly to a knowledge-based curriculum, how long it will be before someone controlling the purse strings asks why we need human teachers at all. If we continue to focus on teaching academic knowledge, I fear that this will become a reality all too quickly. It is a dystopian view, certainly, but if we can’t evaluate what’s important in education beyond the knowledge-based curriculum, then we risk handing over too much to artificially intelligent systems.
What happens then? Will schools employ a team of bouncers and childminders to manage pupil behaviour while the machines teach? The dystopian scenario of robots teaching students is not one any of us would want to entertain, but the possibility is real.
It is therefore urgent and important that we do two things.
First, we must focus our education systems on the aspects of intelligence that enhance human development in ways that are crucial to society and that a machine simply cannot teach: social interaction, human empathy and understanding, for example.
Second, AI should be used to help us enhance our human intelligence in the areas where AI is weak. For example, it is difficult – probably impossible – for AI to achieve collaborative problem-solving. Different AI systems can work together, but they are not able to interact socially, nor are they able to justify their decisions: two key requirements for good collaborative problem-solvers.
AI can, however, support humans to learn to be better at working together to solve problems. AI can help with adaptive group formation and expert facilitation. It can also engage human students in working together, for example, to teach an AI student (see Betty’s Brain, bit.ly/TeachBettyBrain).
You may not believe me. You may not agree with me. But I am certain that we must ensure that educators have a voice in the development of the educational applications of AI, because the futures of our children depend on us getting this right.
Rose Luckin is a professor of learner-centred design at the UCL Institute of Education. Her book, Machine Learning and Human Intelligence: The future of education for the 21st century, is available now