By seeing how our brains react to the sound of music, researchers believe they could find a way to enable the speechless to talk again.
While we may internally feel our emotional reaction to a piece of music, what happens in the brain when we hum a tune in our head has been a mystery – until now.
When we listen to music being played, we can record and analyse the neural responses that each sound produces as it is heard.
This, of course, isn’t the case when we imagine a tune, as there is no sound to which the neural response can be attributed.
Simple, but effective
To help solve this, a team of researchers at the École Polytechnique Fédérale de Lausanne in Switzerland and the University of California, Berkeley, has published a paper in the journal Cerebral Cortex detailing its findings using a human-machine interface.
Working with a patient with epilepsy, the researchers asked them to play a piece of music on an electric piano with the sound turned on, and recorded the brain’s response.
The patient was then asked to play the same song on the piano with the sound turned off, while imagining the music in their head at the same time.
Brain activity was then measured a second time. Now the music existed only as the patient’s mental representation; the notes themselves were inaudible.
Comparing the two recordings showed that when we imagine music in our heads, the auditory cortex and other parts of the brain process auditory information, such as high and low frequencies, in much the same way as when they are stimulated by real sound.
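Studies of this kind typically work by fitting a linear decoding model that maps recorded neural activity back onto the sound’s spectrogram. The sketch below is purely illustrative – it uses synthetic data and a generic ridge-regression decoder, not the study’s actual pipeline – but it shows the basic idea of reconstructing frequency content from electrode signals.

```python
import numpy as np

# Illustrative sketch only: a generic linear (ridge-regression) decoder
# mapping simulated electrode activity to spectrogram frequency bins.
# All data here is synthetic; this is not the study's actual method.

rng = np.random.default_rng(0)

n_samples = 500      # time points
n_electrodes = 32    # simulated ECoG channels
n_freq_bins = 16     # spectrogram frequency bins to reconstruct

# Hypothetical ground-truth linear mapping from neural activity to sound
true_weights = rng.normal(size=(n_electrodes, n_freq_bins))
neural = rng.normal(size=(n_samples, n_electrodes))
spectrogram = neural @ true_weights + 0.1 * rng.normal(size=(n_samples, n_freq_bins))

# Fit a ridge-regression decoder on the first half of the recording
train, test = slice(0, 250), slice(250, 500)
lam = 1.0  # regularisation strength
X, Y = neural[train], spectrogram[train]
W = np.linalg.solve(X.T @ X + lam * np.eye(n_electrodes), X.T @ Y)

# Evaluate on held-out data: how well does the decoder predict the spectrogram?
pred = neural[test] @ W
r = np.corrcoef(pred.ravel(), spectrogram[test].ravel())[0, 1]
print(f"held-out correlation: {r:.2f}")
```

If imagined music drives the auditory cortex the way real sound does, a decoder trained on audible playing should also recover frequency content from the silent, imagined condition – which is the comparison the researchers exploited.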
Deeply invasive technique
Gathering this information was a difficult process, given that the human-machine interface used a technique called electrocorticography, which involves implanting electrodes beneath the skull, directly on the surface of the patient’s brain.
Usually used to treat people with epilepsy who cannot take medication, the technique can measure brain activity with a very high spatial and temporal resolution – a necessity, given just how rapid neuron responses are.
This allowed the team to map out the parts of the brain covered by the electrode grid. The aim is to one day apply these findings to language, for people who have lost their ability to speak.
“We are at the very early stages of this research,” said Stéphanie Martin, lead author of the study.
“Language is a much more complicated system than music – linguistic information is non-universal, which means it is processed by the brain in a number of stages. This recording technique is invasive, and the technology needs to be more advanced for us to be able to measure brain activity with greater accuracy.”