What if your classroom bot learned to be racist?

AI can lead, and already has led, to life-changing blunders when deployed using bad data
18th May 2018, 12:00am
Imagine a vital public service that decided to harness the power of artificial intelligence (AI) to help make important decisions about people’s lives. Now imagine that the same technology taught itself to be racist and penalised black people while favouring white people.

It may sound like a nightmare vision from the pen of a Hollywood sci-fi screenwriter, but it has already happened in real life - and the scenario typifies some of the concerns people have about the potential effects of a greater reliance on technology to educate children.

The example comes from the US justice system, where a number of states employ the Correctional Offender Management Profiling for Alternative Sanctions system, which uses AI to assign risk ratings to defendants to help judges make sentencing and parole decisions.

Peers sitting on the House of Lords Select Committee on AI have heard evidence of investigations that “found that the system commonly overestimated the recidivism risk of black defendants, and underestimated that of white defendants”.

The problem is one of unconscious bias, which raises fears about what might happen if similar glitches arose in the use of AI in schools.

Sometimes the bias results from unrepresentative data being fed into the AI systems; disadvantaged communities can suffer from “data poverty”, which means they are underrepresented in datasets.

Some biases might only emerge when an algorithm processes data. The Leverhulme Centre for the Future of Intelligence has warned that some algorithms “might exacerbate certain biases, or hide them, or even create them”.

Other biases could result from the lack of diversity among the people creating the technology, prompting concerns that women and minority ethnic groups could lose out.

It is a problem that peers took seriously enough to call on the government to “incentivise the development of new approaches to the auditing of datasets used in AI, and to encourage greater diversity in the training and recruitment of AI specialists”.

When she spoke at the Headmasters’ and Headmistresses’ Conference earlier this month, scientist and broadcaster Kat Arney told school leaders to heed the rise of the computer in classrooms, saying that “if you put garbage in, you get garbage out”.

“We need to be really sure that the people building these systems are training them with reliable, good data,” she warned.

“If they train an AI system or a machine-learning system with bad data, you will get bad results from it.”

Privacy fears

The quality of data matters hugely because data is the vital raw ingredient that intelligent technology needs to do its work. And information on children in schools can be highly sensitive, encompassing attainment, behaviour and personality.

For schools in the UK, recent coverage has focused on the widely used ClassDojo app, which awards children positive or negative points for their behaviour.

The company behind it is based in the US, where the data gathered on British children - including photos and video footage - is stored. Some experts question whether parents are giving informed consent or understand the app’s lengthy privacy policy.

Amid frequent news stories of corporate data breaches, further questions about stored information will arise if parents lose confidence in how their children’s data is being used. For Arney, that loss of trust could lead to a new phenomenon: “data opt-outers”.

“Are we going to see families who go, ‘I don’t want my kids to be part of this data landscape’?” she asks. “It would be data home-schooling: ‘I do not want my children to be in this data ecosystem.’”

If AI became an integral part of delivering education in schools, Arney asks what such data refusal would mean for equality of access and fairness in education.

While some fear AI may open up a divide between those who opt in and those who opt out, others worry that a gulf could emerge between rich and poor. Sir Anthony Seldon, the vice-chancellor of Buckingham University and a former private school head (see page 16), is one of the optimists, heralding a future in which AI provides equality of access for all.

But Rose Luckin, an expert in AI and education at UCL’s Institute of Education (see page 17), warns that the opposite could happen. In her pessimistic scenario, schools in disadvantaged areas could in future be forced to rely on cheap models of technology, while those in better-off areas could afford the highly qualified teachers who would add the important human element.

Such an educational “apartheid” would involve “bouncers and minders for poor kids, and a lovely enriched blended experience for the better-off”.

For her, the answer is to educate the public and debate the issues now - and to involve teachers in the development of the edtech of tomorrow.

The need for society to understand AI, and to shape decisions about how it is used rather than be shaped by it, is a common theme of both optimists and pessimists. As the House of Lords committee concluded, “there is a rapidly growing need for public understanding of, and engagement with, AI to develop alongside the technology itself”.


@geomr
