Researchers have reconstructed a recognizable version of the Pink Floyd song “Another Brick in the Wall” directly from recorded brain activity.
Using advanced machine learning techniques, the team was able to extract enough acoustic information from listeners’ brain signals to identify the song and recreate an intelligible version.
Key highlights:
- Researchers recorded brain activity via electrodes in the auditory cortex as participants listened to Pink Floyd.
- They used complex decoding models to reconstruct the song spectrogram from the recorded neural signals.
- The reconstructed song was identifiable as Pink Floyd, showcasing how much musical information is encoded in the brain.
- The study provides new insight into how the brain processes complex musical stimuli.
Source: PLOS Biology, August 15, 2023
Neuroscientists Reconstruct Pink Floyd Song from Brain Activity
A team of neuroscientists has achieved a remarkable feat – reconstructing a popular rock song directly from brain activity.
Recording the neural responses of listeners as they heard Pink Floyd’s “Another Brick in the Wall,” the researchers used advanced machine learning to decode the brain signals and recreate an identifiable version of the song.
Published in PLOS Biology, the study demonstrates just how much acoustic information about music is represented within our auditory cortex.
Using the reconstructed song spectrogram, the researchers generated an audio waveform that is clearly recognizable as Pink Floyd, complete with the melody, vocals, guitar, drums and basic timbres of the instruments.
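The article does not spell out how the spectrogram was converted back into sound. A common way to invert a magnitude spectrogram when phase information is missing is the Griffin-Lim algorithm; the sketch below uses librosa, and every value in it (FFT size, hop length, sample rate, and the placeholder spectrogram itself) is an illustrative assumption rather than the study's actual setting.

```python
import numpy as np
import librosa
import soundfile as sf

# Placeholder for a decoded magnitude spectrogram with shape
# (frequency_bins, time_frames). In a real pipeline this array would
# come from the neural decoding model.
n_fft, hop_length, sr = 1024, 256, 16000
S_decoded = np.abs(np.random.randn(n_fft // 2 + 1, 500))

# Griffin-Lim iteratively estimates the phase that a magnitude
# spectrogram discards, then inverts the STFT to obtain a waveform.
waveform = librosa.griffinlim(
    S_decoded,
    n_iter=32,            # more iterations -> better phase estimate
    hop_length=hop_length,
    n_fft=n_fft,
)

sf.write("reconstruction.wav", waveform, sr)
```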
This fusion of neuroscience and music highlights the power of brain decoding techniques and provides new insight into how our auditory system processes the complex, multidimensional stimuli of music.
How They Recorded Brain Activity During Music Listening
The researchers recorded the neural activity of 29 epilepsy patients who had electrodes implanted in their brains for clinical treatment purposes.
This allowed the team to directly measure brain signals from the auditory cortex – the key region of the brain involved in processing sound and music.
As the patients passively listened to “Another Brick in the Wall,” the activity from over 2,600 electrodes was recorded.
The researchers focused their analyses on the high-frequency activity (HFA) signals, which are linked to local brain processing.
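To make the HFA step concrete, here is a minimal sketch of one standard way to extract an HFA envelope from intracranial recordings: band-pass the signal in a high-gamma range and take the Hilbert envelope. The 70-150 Hz band, sampling rate, and array shapes are assumptions for illustration, not the paper's exact preprocessing.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def hfa_envelope(ecog, fs, band=(70.0, 150.0)):
    """Extract a high-frequency activity (HFA) envelope from ECoG.

    ecog: (n_electrodes, n_samples) array of voltage traces.
    fs:   sampling rate in Hz.
    band: HFA passband; 70-150 Hz is a common choice, used here as
          an assumption rather than the study's exact setting.
    """
    # Zero-phase band-pass filter in the HFA range.
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, ecog, axis=-1)
    # The Hilbert transform yields the analytic signal; its magnitude
    # is the instantaneous amplitude, i.e. the HFA envelope.
    return np.abs(hilbert(filtered, axis=-1))

# Example: 4 electrodes, 10 s of data at 1 kHz.
fs = 1000
ecog = np.random.randn(4, 10 * fs)
env = hfa_envelope(ecog, fs)
```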
Decoding Music from the Auditory Cortex
The core finding was that, by applying machine learning models to these recorded HFA signals, the researchers were able to reconstruct parts of the musical stimulus – essentially decoding the song from patterns of brain activity.
They did this by training models to map the HFA responses onto the original song's spectrogram, which captures the timing, frequency content, and intensity of the audio.
The more complex, nonlinear models proved superior, yielding reconstructions that were identifiable as “Another Brick in the Wall” and captured elements like the vocals, melody and harmony.
The reconstructed song sounded more muffled than the original, but demonstrated how much musical data is represented in the fine-grained activity of the auditory cortex.
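The study's models are not reproduced here, but the general recipe of regressing spectrogram frames on time-lagged neural features can be sketched with scikit-learn. Everything below (array shapes, lag count, hyperparameters, and the random stand-in data) is an illustrative assumption; it only shows how a nonlinear decoder can be compared against a linear one on the same features.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

def lagged_features(hfa, n_lags):
    """Stack time-lagged copies of each electrode's HFA envelope so the
    decoder sees a short window of neural activity per spectrogram frame."""
    n_elec, n_t = hfa.shape
    X = np.zeros((n_t, n_elec * n_lags))
    for lag in range(n_lags):
        shifted = np.roll(hfa, lag, axis=1)
        shifted[:, :lag] = 0.0  # zero out samples wrapped from the end
        X[:, lag * n_elec:(lag + 1) * n_elec] = shifted.T
    return X

# Toy stand-ins: 64 electrodes, 2000 time frames, 32 spectrogram bins.
rng = np.random.default_rng(0)
hfa = rng.standard_normal((64, 2000))
spec = rng.standard_normal((2000, 32))

X = lagged_features(hfa, n_lags=10)
X_tr, X_te, y_tr, y_te = train_test_split(X, spec, test_size=0.2, shuffle=False)

linear = Ridge(alpha=1.0).fit(X_tr, y_tr)
nonlinear = MLPRegressor(hidden_layer_sizes=(128,), max_iter=300).fit(X_tr, y_tr)

print("linear R^2:   ", linear.score(X_te, y_te))
print("nonlinear R^2:", nonlinear.score(X_te, y_te))
```

On real recordings, decoders like these are typically evaluated on held-out song segments, for example by correlating the decoded and actual spectrogram bins rather than relying on a single R² score.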
Insights into Music Processing in the Brain
Beyond the headline results, the study also provides new insights into how our brains make sense of music:
- Right-hemisphere dominance: More electrodes showed music-related activity on the right side of the auditory cortex, and right-sided activity contributed more to decoding, indicating right-hemisphere dominance for music processing.
- Primary role of the superior temporal gyrus (STG): The STG contained the most electrodes encoding musical information and was critical for accurate decoding, highlighting its primary role in music perception.
- Sustained/onset response patterns: Different neural populations within the STG exhibited distinct encoding patterns in response to musical features, some responding transiently to note onsets while others showed sustained responses, similar to patterns previously observed for speech.
- Rhythm-tuned region: A subset of electrodes within the STG selectively responded to the musical rhythm at 6.6 Hz, pointing to a specialized region tuned to rhythm processing (a minimal detection sketch follows this list).
- Single-subject feasibility: The researchers reconstructed identifiable music from the STG activity of just one subject, showcasing the potential to extract music from limited neural recordings for brain-computer interface applications.
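As referenced in the rhythm-tuned bullet above, this kind of tuning can be illustrated with a toy power-spectrum analysis of an electrode's HFA envelope: an electrode following the song's 6.6 Hz rhythm shows a spectral peak at that frequency. The simulated envelope and its sampling rate below are assumptions, not data from the study.

```python
import numpy as np
from scipy.signal import welch

fs = 100.0                       # assumed envelope sampling rate (Hz)
t = np.arange(int(180 * fs)) / fs

# Toy HFA envelope: a 6.6 Hz rhythmic component buried in noise,
# standing in for a rhythm-tuned STG electrode.
rng = np.random.default_rng(0)
envelope = 0.5 * np.sin(2 * np.pi * 6.6 * t) + rng.standard_normal(t.size)

# Welch's method estimates the envelope's power spectrum; a
# rhythm-tuned electrode shows a clear peak near 6.6 Hz.
freqs, psd = welch(envelope, fs=fs, nperseg=int(10 * fs))
print(f"strongest rhythmic component: {freqs[np.argmax(psd)]:.1f} Hz")
```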
Overall, by reconstructing a complex musical stimulus from the brain itself, the study demonstrates the power of neural decoding techniques and how much rich information about music is represented within our auditory system.
A New Paradigm for Studying Music Cognition
From a methodological perspective, the researchers state that stimulus reconstruction represents a new paradigm for studying our perception of complex real-world stimuli like music and speech.
While most neuroscience studies use simplified stimuli like tones and syllables, reconstructing an actual song provides a more ecologically valid window into real neural functioning.
Because songs unfold over time in a continuous stream, the reconstruction approach also allows researchers to investigate the temporal dynamics of music processing, rather than the brief snapshots captured by fMRI imaging studies.
Studying music cognition through reconstruction may enable future studies assessing how factors like musical training or culture affect perception.
The technique also shows promise for brain-computer interface applications, such as decoding music that a user imagines rather than hears.
By validating stimulus reconstruction as a powerful approach to study real-world cognition, this pioneering study paves the way for future breakthroughs in understanding the neuroscience of music as well as the complex stimuli that fill our lives.
Reference
- Bellier L., et al. (2023). "Music can be reconstructed from human auditory cortex activity using nonlinear decoding models." PLOS Biology.