Pupils are confiding in AI chatbots - these are the warning signs
Artificial intelligence is often discussed as a classroom tool - for lesson planning, assessment or even homework support.
But research suggests that AI is taking on a more troubling role in children's development: young people are using it as a confidante, comforter and, in some cases, a substitute friend. For teachers and designated safeguarding leads (DSLs), this raises urgent questions with serious implications.
For example, a 2024 University of Cambridge study found that many children struggle to recognise the “empathy gap” in AI chatbots. While these systems can mimic warmth and understanding, they lack the depth of human care. Yet pupils often mistake the simulated empathy for the real thing, placing their trust in a tool that cannot truly support them.
A more recent report by Internet Matters, Me, Myself and AI, reinforces this concern. It says large numbers of children, particularly vulnerable pupils, are turning to chatbots for companionship and emotional support. Many described these bots as “friends” or “safe spaces”, highlighting how easily technology can blur the boundaries between tool and relationship.
The AI safeguarding risk
Children who feel isolated or anxious may find a chatbot’s constant availability deeply appealing.
A bot hardly ever judges, always responds and mirrors the tone it is given. For some, that can provide temporary reassurance. But the danger lies in the illusion of understanding. A bot cannot notice body language, escalate concerns or reach out to an adult when a child’s wellbeing is at risk.
If a pupil confides suicidal thoughts to a chatbot, there is no guarantee of a safe response. Some systems may redirect to helplines, but others might miss the cues altogether. It is precisely this gap between what children expect and what chatbots can actually provide that creates safeguarding risks.
This is especially worrying given that the Internet Matters study highlights that those already struggling with loneliness or poor mental health are more likely to use AI as a source of comfort.
Warning signs schools should watch out for
While schools have become alert to the red flags for online grooming or bullying, there are also early warning signs that AI may be playing an unhealthy role in a child’s life. These can include:
- A pupil describing or referring to a chatbot as a “friend” or source of emotional advice.
- Withdrawal from peer groups or reluctance to share online activity.
- Language that seems unusually polished, repetitive or “bot-like”, echoing chatbot phrasing.
- Reliance on AI for emotional reassurance, not just for study support.
- Expressions of despair or hopelessness tied to conversations online.
These signs are not definitive proof of harm but should prompt gentle inquiry. Schools cannot know exactly what conversations pupils are having with AI systems outside of the classroom but can be alert to shifts in behaviour, language and social patterns.
Practical steps for schools
Beyond this, there are some practical steps that schools can take to understand - and engage with - how pupils are using AI chatbots.
1. Bring AI into safeguarding policies
Policies should explicitly reference AI chatbots. Just as schools adapted policies for social media a decade ago, they now need clear guidance on how AI fits within online safety.
2. Educate pupils about the ‘empathy gap’
Personal, social, health and economic (PSHE) education lessons can help children to understand that while chatbots can sound human, they cannot provide genuine care. Emphasising the difference between simulated empathy and real empathy is vital.
3. Train staff to spot the signs
Teachers and DSLs should be briefed on the findings of research such as the Cambridge and Internet Matters reports so they can recognise when pupils may be over-relying on AI for emotional support.
4. Strengthen conversations with parents
Parents need to know that their children may be using AI not only for schoolwork but also for comfort. Schools can help by sharing resources, signposting to the research and encouraging open family dialogue.
A new frontier in safeguarding
AI is not just another app or website to block. Its conversational nature means that pupils can form bonds that differ from those with other technologies.
For schools, the implication is clear: safeguarding frameworks must evolve. Just as DSLs learned to navigate the risks of Facebook, Snapchat and TikTok, they must now grapple with AI.
This does not mean rejecting the technology, which can still offer educational benefits, but it does require vigilance, clarity and proactive education.
The ultimate message should be simple: a chatbot cannot replace a trusted adult. When children confuse simulation for care, the consequences can be devastating. Teachers, leaders and DSLs have a critical role in ensuring that pupils know the difference, and in creating a school culture where children seek real human help when they need it most.
Michael Green is visiting professor in the School of Education at the University of Greenwich