
Are we right to welcome AI into schools?

AI is already taking root in education – and offering exciting, efficient new ways of working. But we need to be aware of potential problems


We all know that ed tech is big business (government figures say the industry is worth around £170 million to the UK economy) and artificial intelligence is set to be a big part of it.

In fact, it may already be a bigger part of your school life than you realise.

A recent report from Nesta states that AI tools are “already being used in schools”, whether by students using interactive online tools like Mathigon or by Ofsted itself.


But how well do we actually understand these systems?

Many of the apps we use at home or school – like Manga High, Mathletics, Century or Class Charts, as well as Facebook and Google – statistically analyse past actions and answers; these are known as training data and help AIs to adapt or predict future outcomes.

This is weak or narrow AI: statistical pattern-recognition software.

No sentience, but statistics, replete with the vulnerabilities we learned about in GCSE maths.

We don’t have strong or general AI yet; that’s the movie version (think Holly from Red Dwarf).

A lot of AIs essentially just draw lines of best fit through data (or, to use the fancy term, fit a hyper-plane to an n-dimensional dataset).

Imagine the AI plotting lots of crosses on graph paper, then drawing a line of best fit through them.

If the graph were of attainment against age, for example, the AI might flag those students below the line as having made below average progress for their age, and refer them for extra support.
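The line-of-best-fit idea above can be sketched in a few lines of Python. This is a minimal illustration, not any vendor's actual algorithm: the function names, thresholds and sample data are all invented. It fits attainment against age by ordinary least squares, then flags students whose attainment falls below the fitted line.

```python
# A minimal sketch of the "line of best fit" approach described above.
# All names and the sample data are invented for illustration.

def fit_line(xs, ys):
    """Ordinary least-squares fit: returns (slope, intercept)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

def flag_below_line(students):
    """Flag students whose attainment falls below the fitted line."""
    ages = [age for _, age, _ in students]
    scores = [score for _, _, score in students]
    m, c = fit_line(ages, scores)
    return [name for name, age, score in students if score < m * age + c]

# Hypothetical (name, age, attainment) records:
students = [("Amy", 11, 52), ("Ben", 12, 60), ("Cai", 13, 55),
            ("Dee", 14, 72), ("Eva", 15, 78)]
print(flag_below_line(students))  # prints ['Cai']
```

A real system would fit a hyper-plane through many more variables than age, but the mechanics – and the vulnerabilities of basing referrals on a statistical trend – are the same.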

Others build decision trees (essentially flow charts) to classify things, such as predicting what grades students are likely to achieve based on their track records.
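A grade-predicting flow chart of this kind can be written out as nested if-statements. This toy version is hand-written rather than learned from data, and the thresholds and grade bands are invented purely for illustration.

```python
# A toy decision tree ("flow chart") for grade prediction.
# Thresholds and grade bands are invented for illustration;
# a real system would learn these splits from training data.

def predict_grade(mock_score, homework_rate):
    """Predict a GCSE grade band from a mock-exam score (0-100)
    and a homework completion rate (0.0-1.0)."""
    if mock_score >= 70:
        return "7-9" if homework_rate >= 0.8 else "6-7"
    if mock_score >= 50:
        return "5-6" if homework_rate >= 0.6 else "4-5"
    return "1-3"

print(predict_grade(75, 0.9))  # prints 7-9
```

Machine-learning libraries build such trees automatically by choosing the splits that best separate the training examples – which is exactly why biased or unrepresentative training data produces biased trees.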

Not human

AIs work rapidly and can spot patterns that humans cannot see. In 2016, AlphaGo (DeepMind’s AI) played a winning move in the board game Go that a professional player described as "not a human move."

While this is great for gaming contexts, these AIs could also make non-human decisions about our human communities if we give them permission.


Since 2018, Ofsted has been using AI to make predictions about schools’ performances in inspections.

The for-profit Behavioural Insights Team, which sells the algorithm to Ofsted, say that it can “identify six times as many "inadequate" schools as inspections alone”, thus “freeing up Ofsted employees to work with other schools”.

But they don’t say which factors influence outcomes the most, “partly because they don’t want schools to know” (to stop schools gaming the system, perhaps?), and “partly because it’s difficult to know exactly how these algorithms are working”.

Problems with training data

Most AIs need to be trained with examples. They identify patterns from questions answered by students, and can use these to help adapt learning pathways and predict results.

But this model presents several possible issues:


Many adaptive personalised learning platforms cite figures like Summit’s “72,000 students” or Third Space’s “7,000 students a week” to bolster confidence that their systems have learned from plenty of student answers.

But what if most of those answers came from a minority of students who love working online? Then all our students would be pushed along learning paths similar to those of a few tech-savvy students.


If an education AI has mainly been used by inner-city schools, for example, it will not be as effective if used for predicting outcomes for students at other types of institutions.

IBM’s medical AI suggested unsafe treatments for cancer patients because it had been trained on synthetic, made-up training data (the developers couldn’t obtain enough real patient data).


How do we know the data is accurate? In 2011, 206 teachers in Washington DC were fired based on AI recommendations.

Investigations later revealed the abnormal student scores influencing their dismissals were likely due to cheating, enabled by other staff (who perhaps didn’t want to get fired).


AIs will blindly identify patterns in the data, oblivious to ethics. In 1986, a medical school in London was found guilty of racial and sexual discrimination.

It had used an AI to screen applicants and the AI had quickly learned from its human predecessors’ decisions (training data) to more readily reject people with foreign-sounding surnames and birthplaces, as well as women.

Data management

What, where and how is our data stored, and what happens when it leaves our control? Could a tired staff member inadvertently attach too much information to an email, as happened at Boeing (think Excel spreadsheets with hidden columns and tabs)? Could it be harvested at scale, as with Cambridge Analytica?

AIs can be incredibly effective in all kinds of areas of education, from supporting children with special educational needs and disabilities, to allowing students to interact with people and places from the past.

We should certainly try to identify and utilise the genuinely useful educational AI products being developed.

But we need to be cautious, or we run the risk of giving away the private data of our students and staff to unsubstantiated and unaccountable black boxes, as well as handing over scarce funds for immature systems that might be the latest incarnation of the hi-tech snake-oil that Amanda Spielman warned us about.

Omar Al-Farooq is a secondary maths teacher and software engineer
