
Nanny works in mysterious ways

A story about artificially intelligent "nanniebots" patrolling chatrooms to catch internet paedophiles appeared in the New Scientist magazine last week. Predictably it caused a flurry of media interest.

Jim Wightman, an IT consultant from Wolverhampton and the man behind the software, said he had created a program that could pose as a child in online conversation, engage perverts, convince them it was talking to a real child and detect signs of "grooming".

But some in the techie community say that conversations the nanniebots have had with journalists and IT experts suggest human characteristics far beyond current artificial intelligence capabilities.

Cameron Marlow, a computer boffin at the Massachusetts Institute of Technology, agreed with Mr Wightman to set up a test in which a "nanniebot" could talk to an AI program designed by Cameron.

Unknown to Mr Wightman, Cameron's program was a friend pretending to be a computer. The nanniebot's responses suggested a similar impersonation might be happening on Mr Wightman's side of the conversation. After some minutes, the nanniebot got bored with its interlocutor: "Sorry mate, but he's not very good! Sorry!" Cameron's friend, staying in character, wrote: "I am sorry, I don't follow. Please explain."

The nanniebot persisted: "Cameron, are you there? I don't like talking to this bot thingy."

Mr Wightman assures us that the nanniebot is not him tapping away at a computer, but Duncan Graham-Rowe, author of the New Scientist article, admits he has doubts.

"It is a very difficult one to call. If he is just pulling a fast one then he is doing it in a very clever way, " he said.
