A three-month-old baby sits before what appears to be a nightmarish version of Mickey Mouse from the future. It’s a sleek white, slightly misshapen Mickey, and it’s making strange whooshing noises.
It is called AABy. It’s a robot. And its creator, a celebrated professor of neuroscience, believes that it might accelerate language development in young children by up to 18 months.
In the 8 April issue of TES, science writer Kat Arney took a closer look at AABy.
It was created by April Benasich, professor of neuroscience at Rutgers University in Newark, New Jersey. Benasich is a specialist in early language development. She and other scientists around the world have made significant progress in the past five years in learning about how children acquire language.
She explains how her research has shown that babies achieve this feat by listening to tiny variations in the sounds around them and working out whether they are likely to be parts of words or not. From there, they use information from the speech they hear around them – from mum, dad, the TV, or anyone else in the vicinity – to assemble this knowledge into the components of language.
“What happens is that there are representations – what we call acoustic maps – getting set up in the brain,” she explains. “Over time babies gradually do something called perceptual narrowing. They first set up these pre-linguistic maps, and as they hear more and more things in their environment they gradually begin to ignore sounds that are not part of the language - you can see that in the changes in their brain patterns.”
Just by looking at EEG traces – recorded by an electroencephalogram, a net of tiny sensors placed on a baby's head – as a six-month-old hears and maps different sounds, Benasich says that she can predict with 90 per cent accuracy whether the child will have language problems by the age of three and is likely to struggle with literacy later on.
But diagnosis is only useful if you can do something about it – and that has been Benasich’s main concern for the past few years.
Drawing on previous research with rats showing that it’s possible to influence and even reprogramme the animals’ responses to sounds, her idea was to find a way to train babies to set up more effective language maps in their brains. The solution comes in the form of AABy.
Perched on a bendy tripod, the prototype has a black plastic semicircle across its ‘face’, like the visor on a motorbike helmet, which contains an eye-tracking device. The left ‘ear’ contains a small video screen, while the right-hand one flashes with coloured LED lights.
Once a baby is put in front of it, strapped into an infant seat, the robot gets to work. First, the coloured lights flash to grab the child’s attention, then sounds start to play – swooshy-sounding sweeps designed to mimic the complex tones found in language. These are the exact kinds of sounds babies need to pay attention to as they develop the sonic maps in their brains. Then, just as the tone subtly changes, a short video clip plays on the screen, acting as a reward. “We’re using little snippets of Sesame Street for now – they love it!” Benasich laughs.
Once the baby has got the hang of what’s happening – when the sound changes, the video plays – then the real training can begin.
Watched by the eye tracker, the baby learns to recognise when the sound changes, flicking its gaze towards the video screen in anticipation of the dancing puppets. If the baby gets it right, the tasks get harder, pushing it to disentangle ever more subtle shifts in tone and complexity. If its attention starts to waver, the LED lights flash again to snap it back. And if it falls asleep or loses interest altogether, the unit eventually switches off.
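The training cycle described above – reward a correct anticipatory glance, raise the difficulty, flash the lights after a miss, and switch off if the baby loses interest – amounts to a simple staircase-style control loop. The sketch below is purely illustrative and is not Benasich's actual software; the gaze responses are simulated, and all device behaviour (lights, sounds, reward clips) is reduced to comments.

```python
# Illustrative sketch of the adaptive training loop described above.
# This is NOT the AABy software; the eye tracker is simulated by a list
# of True/False values (True = baby looked at the screen in time).

def run_training(responses, start_level=1, max_level=10, max_misses=3):
    """Staircase-style loop: raise difficulty after each correct
    anticipatory gaze, flash the lights after a miss, and stop after
    repeated misses (the baby has fallen asleep or lost interest).
    Returns the difficulty level reached."""
    level = start_level
    consecutive_misses = 0
    for looked in responses:
        if looked:
            # Correct anticipation: play the reward clip, make it harder.
            level = min(level + 1, max_level)
            consecutive_misses = 0
        else:
            # Attention wavering: flash the LEDs, keep difficulty as-is.
            consecutive_misses += 1
            if consecutive_misses >= max_misses:
                break  # baby has lost interest; switch the unit off
    return level
```

The design choice worth noting is that difficulty only ever ramps up on success, so the task stays just beyond what the baby has already mastered.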
Does it work? Benasich tested 49 four-month-olds, measuring their brain responses before and during sound training sessions and then again three months later. A third of the group got active training, with the sounds getting harder as they got better at spotting them changing. Another third passively received a random mixture of easy and hard-to-discriminate noises, which they couldn’t control, while the rest weren’t trained at all. The results, published in the Journal of Neuroscience in 2014, were impressive.
“What we found was that there was this huge improvement in acoustic mapping, so the babies that had the active intervention responded very quickly and efficiently.”

Following the babies further, Benasich has exciting preliminary results suggesting that the training might provide a boost for language development by 18 months.
Hope for the future?
This is very early research and Benasich is unwilling to make any substantial claims as yet. She is reluctant to do so until all the data are in and have been published, and it is certainly too soon to speculate that the intervention will help children who might otherwise struggle to read.
In the feature, child psychologists also express reservations about why you would want to intervene at such a young age and about how the intervention might be used.
But Benasich is hopeful for the future.
"This kind of intervention may make them more able to take advantage of what's in their environment. If you change the threshold so that the subset of kids that really need help are much more efficient, I think you could actually shift the distribution of the number of kids that have learning disorders, so I hope that it will make a big difference.”
This is an article from the 8 April edition of TES. This week's TES magazine is available in all good newsagents and as a digital download.