"If you have a stroke in the left hemisphere you are much more likely to have a language impairment than if you have a stroke in the right hemisphere," Sammler says. Moreover, brain damage to certain areas of the right hemisphere can affect a person's ability to perceive music.
The study was inspired by songbirds, Zatorre says.
Studies show that songbird brains decode sounds along two separate dimensions. One tracks how quickly a sound fluctuates over time. The other detects the frequencies the sound contains.
"We thought, hey, maybe that's what the human brain does too," Zatorre says.
To find out, the team enlisted a composer and a soprano, who created a large set of a cappella songs, each just a few seconds long.
Then the team used a computer to alter the recordings. Sometimes they removed information about sound frequencies, which produced a breathy voice a bit like Darth Vader's.
"The speech is perfectly comprehensible, but all the melody is essentially gone," Zatorre says.
Other songs were altered to remove information about how the sound changed over time. The result sounds a bit like someone humming a sentence rather than singing the words.
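The two manipulations can be sketched in code. This is an illustrative toy only, not the study's actual method (the researchers' filtering was far more sophisticated, and every function name here is hypothetical): one version keeps a recording's timing but destroys its pitch content, the other smears the timing while keeping the pitch content.

```python
import numpy as np

def temporal_envelope(x, sr, cutoff_hz=30):
    """Crude amplitude envelope: rectify the waveform, then smooth it
    with a moving average roughly cutoff_hz wide."""
    win = max(1, int(sr / cutoff_hz))
    kernel = np.ones(win) / win
    return np.convolve(np.abs(x), kernel, mode="same")

def remove_spectral_detail(x, sr, seed=0):
    """'Darth Vader' version: impose the recording's envelope on white
    noise, keeping word timing but discarding pitch and melody cues."""
    noise = np.random.default_rng(seed).standard_normal(len(x))
    return temporal_envelope(x, sr) * noise

def remove_temporal_detail(x, sr, smear_hz=2):
    """'Humming' version: swap the fast envelope for a heavily smoothed
    one, blurring word timing while keeping the pitch content."""
    fast = temporal_envelope(x, sr)
    slow = temporal_envelope(x, sr, cutoff_hz=smear_hz)
    return x * slow / np.maximum(fast, 1e-9)
```

Applied to a short sung phrase, the first function yields something breathy but rhythmically intelligible; the second yields a melody with the words washed out, which is the contrast the listeners in the study heard.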
"You can still perceive the melody but you can no longer tell what the speech is," Zatorre says.
Armed with hundreds of altered song fragments, recorded in both English and French, the team set out to learn how a human brain would process these sounds.
The scientists played them for 49 people while an fMRI scanner monitored brain activity. It turned out that the listeners decoded the sounds the same way songbirds do: by separating a sound's time-related elements from the frequencies it contains, and processing each kind of information with a different group of specialized brain cells.
As a result, when we hear a song, it engages both hemispheres of the brain in a way that's different from either speech or music alone, Zatorre says.
"That might be why [songs are] especially prominent and especially meaningful" in cultures around the globe, Zatorre says.
But it's not just songs that require both hemispheres working together, Sammler says. That process is necessary to fully experience any type of sound.
Also, the brain circuits involved probably existed before human language appeared, Sammler says.
"Charles Darwin said the languages that we use today emerged from something that was a song-like proto-language," she says.
Now that there's good evidence a song takes two separate paths through the brain, researchers will need to figure out how the brain combines those twin streams of information into a coherent listening experience, Sammler says.
"We perceive the song as a song, right?" she says. "It's one thing and it's not like a speech stream or a melody stream."
Copyright 2020 NPR.