The AI safeguarding risk is here: is your school ready?
A student approaches a member of staff. They say that a photograph of themselves has been altered using AI to make it appear sexually explicit, and that it is being shared in a group chat.
What happens next in your school? Who does the staff member tell, and how quickly? Do staff know not to ask the student to show them the image?
Is there a protocol for whether and when to involve police? How is parental communication managed in a way that does not escalate in the wrong direction?
If your answer to any of those questions is ‘I am not entirely sure’, you are not alone. And the scenario is no longer theoretical.
What the Internet Watch Foundation has documented
In March 2026, the Internet Watch Foundation (IWF) published findings from their analysis of AI-generated child sexual abuse material. In just one year, they found an exponential increase in the volume and severity of AI-generated imagery.
This material is not confined to the dark web. IWF analysts have now found AI-generated child sexual abuse images accessible on mainstream platforms on the open internet.
At the same time, the broader online harm landscape continues to intensify. The UN reports that over a third of young people in 30 countries have experienced cyberbullying, and 1 in 5 have skipped school because of it.
Among girls and young women globally, UN Women reported that up to 58% have faced digital abuse.
These are not risks that will arrive eventually. They are already arriving in schools across the world: through disclosures, through behaviour changes, through the online lives that students carry into school every morning and back into their homes – and in boarding schools, into their dormitories – every night.
The question our safeguarding survey asked
We asked more than 4,500 school professionals whether they had dealt with a safeguarding incident where AI-generated content played a role, and how confident they felt about their current protocols for handling such cases.
The answers are in our 2026 International Safeguarding Report – and we will publish them during Safeguarding Awareness Week.
What we can say now is that the picture is more varied than most school leaders might hope. Some schools have thought this through carefully. Many have not, and they are not unusual for that.
The rate at which this risk has evolved has outpaced most schools' ability to update their training and protocols.
Why online harm is not a standalone curriculum problem
When a new online risk emerges, the temptation is to respond with a lesson, an assembly or a parent newsletter about screen time.
Those responses are not wrong, but they are insufficient on their own. Online harm in 2026 is a safeguarding infrastructure problem, not a curriculum problem.
The student who was harassed in a group chat last night does not arrive at school the next morning holding a sign that says ‘online incident’. They arrive withdrawn, distracted, refusing to go to certain lessons – or they don’t arrive at all.
Safeguarding and child protection systems that treat online harm as a separate category from mental health, attendance and peer conflict will miss those presentations consistently.
Our 2026 International Safeguarding Report makes the case, with data, for treating these as one integrated operational picture.
The particular challenge of boarding
For international schools with boarders, online harm carries a specific dimension that day-school frameworks do not fully address.
A boarder who is being harassed online does not go home to a parent who might notice. The school is their home, and the device that connects them to family – potentially across time zones – is also the device through which harm can reach them at any time.
That requires specific child protection protocols, not general guidance. Our 2026 International Safeguarding Report addresses boarding-context online safeguarding directly, drawing on survey responses from international school professionals across more than 50 countries.
What the report covers on online harms and AI
Our 2026 International Safeguarding Report publishes survey data on how schools across the world are currently experiencing and responding to online harms and AI-generated risk.
It includes practical guidance on protocol design for AI-harm disclosures, what to do in the first 24 hours, and how to build staff confidence in a risk category that most training has not yet caught up with.
Pre-register now to be the first to receive the report when it publishes during Safeguarding Awareness Week.