Why teachers must learn to interrogate AI's algorithms

Edtech innovators must work with teachers to ensure that tech is purposeful, robust, valid and safe, write two experts

Rose Luckin and Carmel Kent


Omar Al-Farooq made an important contribution a few weeks ago on the Tes website to the discussion around the merits, and otherwise, of using artificial intelligence (AI) in schools.

His article presented a welcome challenge to edtech companies and the wider industry: to acknowledge that teachers are ready to critically evaluate the advantages and challenges of this piece of the jigsaw of technology use in schools.

It is a conversation that could not have happened 10 or even five years ago. The technology was still being developed, and in most people's minds it was something "of the future". Few people were seriously considering the applications of AI in teaching and learning – and, in truth, many still aren't. The principles behind AI can seem daunting, sometimes even frightening. But AI need not be frightening.

One thing cannot be disputed. AI is here, and it is here to stay. Children use it daily, to chat with their friends, to generate music. It influences their choices as consumers. Regardless of whether it appears in the classroom, it is a part of their lives.

AI in education is a complex creature. It must not work alone but should involve human experts throughout its life cycle: in its design, in decisions about data collection and anonymisation, in the choice of modelling techniques, in the computation of meaningful features and in the evaluation of insights. Teachers are among that list of experts, with whom the technology companies must engage.

AI in education

AI won’t make decisions for the teacher, but it can support humans to make informed choices about other humans – the learners.

Teachers need to learn how to ask questions about the quantity and quality of the data used to train the AI algorithms, about the transparency and explainability of the statistical models used by the AI and about possible biases and limitations of the predictions or recommendations made by it.
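One such question can be made concrete. A minimal, hypothetical sketch (the pupil groups, labels and counts below are invented for illustration, not drawn from any real system) of how someone might check whether every group of learners is fairly represented in a training set:

```python
# Illustrative only: a simple representation check on hypothetical
# training data. Under-represented groups are a warning sign that
# the model's predictions for those learners may be less reliable.
from collections import Counter

# Hypothetical training records: (pupil group, outcome label)
training_records = [
    ("year7", "pass"), ("year7", "pass"), ("year7", "fail"),
    ("year8", "pass"),
    ("year9", "pass"), ("year9", "fail"), ("year9", "fail"),
    ("year9", "pass"), ("year9", "pass"), ("year9", "pass"),
]

group_counts = Counter(group for group, _ in training_records)
total = len(training_records)
for group, count in sorted(group_counts.items()):
    print(f"{group}: {count} records ({count / total:.0%})")
```

Here year 8 pupils supply only one record in ten, so a teacher would be right to ask how trustworthy the system's recommendations for that year group could be.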

Of course, AI can be misused. All technology can. But AI done right should be designed to safeguard its users. This means that the data used to train machine-learning AI should be clearly communicated to its users (if it cannot also be shared), and laid out in its specific context, with possible biases and limitations highlighted. All this must also adhere to standards similar to an academic peer-review process.

GDPR guidelines must be followed, detailing data storage and usage procedures, and the evaluation criteria commonly used for models (such as accuracy, precision and recall) should be clearly reported and made available to schools and parents.
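These criteria are simple enough for any teacher to interrogate. A worked sketch with made-up counts (the scenario of a model flagging pupils for extra support is hypothetical):

```python
# Illustrative only: the three evaluation criteria named above,
# computed from a hypothetical confusion matrix for a model that
# flags pupils who may need extra support.
tp = 40  # pupils correctly flagged as needing support
fp = 10  # pupils flagged who did not need support
fn = 5   # pupils needing support whom the model missed
tn = 45  # pupils correctly left unflagged

accuracy = (tp + tn) / (tp + fp + fn + tn)  # overall share of correct calls
precision = tp / (tp + fp)                  # of those flagged, how many truly needed support
recall = tp / (tp + fn)                     # of those needing support, how many were flagged

print(f"accuracy={accuracy:.2f} precision={precision:.2f} recall={recall:.2f}")
```

A model can score well on one criterion and poorly on another, which is exactly why all of them, not a single headline figure, should be reported to schools and parents.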

Educators, researchers and innovators bear the heavy weight of responsibility to oversee a process that keeps children as safe as possible, while ensuring they are best equipped to function as 21st-century citizens.

It is the responsibility of edtech companies, policymakers, teachers and parents to make sure that learners are equipped with the tools to evaluate the evidence provided to them, so that they fully understand and can make informed decisions about how to consume and provide data.

This is what we do on the UCL EDUCATE programme. Edtech innovators work with researchers and educators to ensure that the technology produced is purposeful, robust, valid and safe. We need educators and AI developers to work together to develop the best AI systems for use in education. The AI developers can learn from the educators about teaching and learning and the educators can learn from the developers about AI.

Omar’s article was so important because, as a teacher, he is among the educators spearheading the discussion and debate already taking place within the industry, and which now needs to happen in schools.

We know that teachers are ready to critically evaluate the many uses and applications of AI and we encourage and welcome them to this process. It is vital that they contribute to the discussion.

Professor Rose Luckin is professor of learner-centred design and director of UCL EDUCATE. She is a leading expert on AI and edtech development and a founding member of the Institute for Ethical AI in Education. Dr Carmel Kent is senior research fellow on the UCL EDUCATE programme
