Researchers from the University of Cambridge say there is more to parents cooing at their infants than meets the eye. They have found that the often involuntary singsong baby talk parents use with their infants helps activate language learning.
Parents, when they gaze into their newborn’s eyes, instinctively break into ‘parentese’: a rhythmic, lilting way of talking that peppers high and low pitches with what seem to be meaningless endearments uttered in soothing tones. The study, published in Nature Communications, found that babies’ brains respond better to this rhythmic talk than to regular complete sentences, and that it aids the child’s language learning.
“Singing is particularly effective, but so are rhythmic activities like bouncing your baby as you sing. The more you baby talk, the better for later outcomes,” says Prof Usha Goswami, the study’s lead author and director at the Centre for Neuroscience in Education, University of Cambridge, UK.
As part of her Cambridge UK Babyrhythm Study, Prof Goswami had previously found evidence that rhythm plays a key role in understanding language. She further wanted to know how it affects children who have language disorders.
Surfing the rhythms
The Cambridge researchers had 50 infants watch videos of a primary school teacher singing nursery rhymes, and recorded the infants’ brain activity with an electroencephalogram (EEG), a non-invasive screening technique. They recorded the brain activity when the babies were four, seven and 11 months old.
The team used special software to relate the pressure waves of the rhymes’ sound to the electrical waves recorded from the infants’ brains. The program could work out which speech sounds a baby’s brain was processing: from the brain-wave patterns alone, it could decode the sounds the infant was hearing or vocalising. This allowed the researchers to measure how infants’ brains encode acoustic (sound-wave) and phonetic (speech-sound) information in continuous natural speech.
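The study’s actual decoding pipeline is not reproduced here, but the general idea of relating a speech signal to brain recordings can be sketched with a toy linear model. Everything below is an illustrative assumption, not the authors’ method: synthetic data stand in for the nursery-rhyme envelope and the EEG, and a ridge regression over time-lagged copies of the envelope plays the role of the encoding model.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 100  # Hz; assumed sampling rate after downsampling

# Synthetic speech amplitude envelope (stand-in for the nursery-rhyme audio)
t = np.arange(0, 30, 1 / fs)
envelope = np.abs(np.sin(2 * np.pi * 2 * t)) + 0.1 * rng.standard_normal(t.size)

# Synthetic "EEG": a delayed, noisy copy of the envelope (hypothetical 100 ms neural lag)
delay = 10  # samples
eeg = np.roll(envelope, delay) + 0.5 * rng.standard_normal(t.size)

def lagged_design(x, n_lags):
    """Stack time-lagged copies of x into a design matrix (TRF-style)."""
    X = np.zeros((x.size, n_lags))
    for k in range(n_lags):
        X[k:, k] = x[: x.size - k]
    return X

# Ridge regression from the lagged envelope to the EEG (an encoding model)
X = lagged_design(envelope, 25)
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ eeg)
pred = X @ w

# Correlation between predicted and actual EEG indicates how strongly
# the recorded signal tracks the speech envelope; the peak weight sits
# near the true neural lag
r = np.corrcoef(pred, eeg)[0, 1]
print(round(r, 2), int(np.argmax(w)))
```

On this synthetic data the fitted weights peak at the built-in 10-sample lag, and the prediction correlates clearly with the noisy “EEG”, which is the kind of evidence such models use to say a brain signal encodes the stimulus.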
Prof Goswami says they found that the brain’s speech and rhythmic information pathways are intertwined. “Brain rhythm surfs the speech rhythm. Brain waves have different speeds, and each wave aligns itself to different acoustic information at the same speed in speech,” she explains.
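The “surfing” Prof Goswami describes, a brain wave keeping a steady phase relationship with the speech rhythm at the same speed, is commonly quantified with a phase-locking measure. The sketch below is a minimal illustration with made-up signals (a roughly 2 Hz rhythm, a plausible rate for nursery rhymes, is an assumption), not an analysis from the study.

```python
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(1)
fs = 100  # Hz; assumed sampling rate
t = np.arange(0, 20, 1 / fs)

# Illustrative speech rhythm: a strong beat at about 2 Hz
syll_rate = 2.0
speech_rhythm = np.cos(2 * np.pi * syll_rate * t)

# Synthetic slow "EEG" oscillation: same speed, fixed phase lag, plus noise
eeg_band = np.cos(2 * np.pi * syll_rate * t - np.pi / 4) + 0.3 * rng.standard_normal(t.size)

# Instantaneous phase of each signal via the analytic (Hilbert) signal
phase_speech = np.angle(hilbert(speech_rhythm))
phase_eeg = np.angle(hilbert(eeg_band))

# Phase-locking value: 1.0 = perfectly constant phase lag, 0.0 = no alignment
plv = np.abs(np.mean(np.exp(1j * (phase_eeg - phase_speech))))
print(round(plv, 2))
```

A brain wave that consistently lags the beat by the same amount scores near 1.0 here, which is what “aligning to acoustic information at the same speed” cashes out to numerically.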
Rhythm first, then speech
According to the study’s findings, babies’ brains initially responded to rhythm: until seven months of age they did not process phonetic information successfully. Their grasp of phonetic sounds improved gradually over the first year, becoming better by 11 months.
The researchers believe rhythm and tone help emphasise words’ syllables, which is key to learning a language. Babies responded better to language conveyed rhythmically in the earlier months, as they relied on rhythm patterns to guess where one word ends and the next begins. So, while babies do not necessarily grasp the sounds of words early in life, they can understand sentence structure better through rhythms and tunes.
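The idea that rhythm cues word and syllable boundaries can be made concrete with a toy segmentation sketch: in a smooth loudness envelope, dips in energy mark plausible boundaries between beats. The envelope below is entirely synthetic (three seconds of an idealised 2-per-second syllable rhythm), chosen only to show the principle, not any procedure from the study.

```python
import numpy as np

fs = 100  # Hz; assumed sampling rate
t = np.arange(0, 3, 1 / fs)

# Toy loudness envelope: energy bumps arriving twice per second
envelope = np.sin(2 * np.pi * 1.0 * t) ** 2

# Rhythm-based boundary cue: local dips in the envelope between energy peaks
mid = envelope[1:-1]
is_dip = (mid <= envelope[:-2]) & (mid <= envelope[2:]) & (mid < 0.05)
boundaries = t[1:-1][is_dip]

print(len(boundaries), boundaries)
```

The dips land exactly between the energy bumps (at 0.5 s, 1.0 s, …), carving the stream into beat-sized chunks before any of the individual sounds are identified, which mirrors the “rhythm first, then speech” ordering reported in the study.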
Prof Goswami says their current research lays the foundation for understanding what happens in the brain of children with dyslexia or developmental language disorders. “Key parameters of acoustic rhythm are perceived less in such children,” she says. And as they delve deeper, she hopes to find more answers on language processing in the brain.
Link to study: Emergence of the cortical encoding of phonetic features in the first year of life