Learning about listening: Researcher explores how people perceive sound
U of T Mississauga’s Christina Vanden Bosch der Nederlanden remembers the moment she became interested in how humans perceive sound.
She was playing the cello in Grade 5 when it was her section’s turn to perform a melody line. As she played, she was suddenly struck by a strong emotional response.
“The string vibrated, which brought sound vibrations to my ear, and then I got chills. I wondered, how is this happening?” she remembers.
When she mentioned the sensation to her conductor, he was familiar with the experience and suggested literature she could read about how people react to sound.
That early experience sparked der Nederlanden’s lifelong curiosity about how people hear sound, leading to her current research on why individuals focus on the human spoken voice over all other sounds in everyday situations.
“A lot of my research has shown that, even from four months of age, our attention is biased to pick up the human voice if there are different sounds playing,” says der Nederlanden, an assistant professor of psychology at U of T Mississauga who also heads the university’s LAMA Lab.
“We are biased to listen to speech above all other sounds.”
Der Nederlanden explains that when somebody is listening to another person speaking, there might be many other competing sounds happening around them – such as a car going by, a coffee maker beeping or a refrigerator humming. Yet, despite these distracting sounds, we are still able to pay attention to what’s most relevant: the person who is talking.
For years, der Nederlanden has studied this phenomenon, known as attentional speech bias, and she was recently awarded two NSERC Discovery Grants to better understand why people’s attention is drawn to the human voice even while many other sounds are happening at the same time.
As part of her NSERC-funded project, “Predicting listeners' attentional bias toward the human voice: perceptual, neural, and semantic factors,” der Nederlanden and her research team will investigate the many factors at play in attentional speech bias.
The team will look at how human development plays a role – including whether people at an early age are innately biased towards acoustic characteristics that are unique to the human voice.
The project will also measure participants’ brain activity to see how their brains track environmental sounds – such as a dog barking or a train going by.
The project is the latest in der Nederlanden’s research that looks at how humans perceive sound. As principal investigator at the LAMA Lab, she and her colleagues study what's relevant in our busy auditory worlds for communication.
Building on der Nederlanden’s previous research, the research team is studying whether babies know the difference between speech and song and how the brain processes music and speech in early development.
“When in development do we know that speech and song are different and require different spheres of knowledge? When in development do we learn these things – and is it important for us to learn these things earlier in development so that we can be good communicators?” der Nederlanden says.
She hopes her research might help to develop training techniques for individuals who struggle with language, including children with dyslexia and autism.
“I’d really like to get connected with some hospitals and local organizations in the area to start seeing how we can work with them, and ask how musical interventions, alongside traditional interventions, could be used to help kids who struggle to pay attention to what’s relevant for language and communication.”