A group of neuroscientists reconstructed a Pink Floyd classic from brain waves using artificial intelligence

Neuroscientists at the University of California, Berkeley have managed to turn brain waves into music, as part of an effort to understand in greater detail how the brain responds when it hears a song. To reach this milestone, the researchers worked with 29 patients who were due to undergo surgery to treat epilepsy.

A total of 2,668 electrodes were attached to the brains of these patients, specifically at three key regions of the cerebral cortex: the sensorimotor cortex (SMC), the inferior frontal gyrus (IFG), and the superior temporal gyrus (STG). Meanwhile, the team of researchers played one of the greatest rock classics: Another Brick in the Wall, Pt. 1, by the British band Pink Floyd.

With everything in place, the researchers began to record the brain waves evoked by the music. With the help of a trained artificial intelligence model, those recordings were decoded back into audible fragments of the song, including the iconic chorus line "All in all it's just another brick in the wall."
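
The general idea behind this kind of decoding is to map the recorded electrode activity onto the song's sound representation (its auditory spectrogram) and then convert the prediction back into audio. As a rough, hypothetical illustration only, with placeholder data, illustrative array shapes, and scikit-learn's Ridge regression standing in for the study's actual models, a decoder of that general kind might look like this:

```python
# Minimal sketch (not the study's code) of decoding a song spectrogram from
# neural recordings: ridge regression maps time-lagged electrode features to
# each frequency bin of the spectrogram. All data here is random placeholder.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

n_samples, n_electrodes, n_freq_bins, n_lags = 3000, 64, 32, 5

rng = np.random.default_rng(0)
neural = rng.standard_normal((n_samples, n_electrodes))   # electrode activity over time
spectro = rng.standard_normal((n_samples, n_freq_bins))   # aligned song spectrogram

def add_lags(x, n_lags):
    """Stack time-lagged copies of the neural features (circular shift for
    simplicity) so the model can use a short window of recent activity."""
    lagged = [np.roll(x, lag, axis=0) for lag in range(n_lags)]
    return np.concatenate(lagged, axis=1)

X = add_lags(neural, n_lags)
X_train, X_test, y_train, y_test = train_test_split(
    X, spectro, test_size=0.2, shuffle=False)

# One regularized linear model predicts all frequency bins at once.
decoder = Ridge(alpha=1.0)
decoder.fit(X_train, y_train)
pred_spectrogram = decoder.predict(X_test)

# In a real pipeline, the predicted spectrogram would then be converted
# back into an audible waveform (e.g., via phase reconstruction).
print("predicted spectrogram shape:", pred_spectrogram.shape)
```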

This is explained by the team of Ludovic Bellier, Anaïs Llorens, Deborah Marciano, Aysegul Gunduz, Gerwin Schalk, Peter Brunner, and Robert T. Knight in the PLOS Biology paper where the discovery was announced.

"We successfully reconstructed a recognizable song from direct neural recordings and quantified the impact of different factors on decoding accuracy. Combining encoding and decoding analyses, we found a right-hemisphere dominance for music perception with a primary role of the superior temporal gyrus (STG), evidenced a new STG subregion tuned to musical rhythm, and defined an anterior-posterior STG organization that shows sustained and onset responses to musical elements."

While this is a historic milestone, it is only a first step in the right direction. Robert Knight, a neurologist and professor of psychology at UC Berkeley's Helen Wills Neuroscience Institute and one of the researchers involved in the experiment, acknowledged that the audio the AI was able to reconstruct resembles only about 43% of the original song. "It sounds a bit like they're talking underwater," Knight said.

However, the researcher was optimistic, given that this was the first time such an experiment had been done and that, as mentioned in the PLOS Biology paper, it yielded other very interesting results. The AI encoding models revealed a new cortical subregion in the temporal lobe that underlies rhythm perception. This, the experts say, could help in developing new interfaces connecting machines and humans.

Robert Knight shared his excitement at the results of the research, as well as giving some hints of how this knowledge may be used in the not-too-distant future.

"It's a wonderful result. For me, music has prosody and emotional content. As the field of brain-machine interfaces advances, this gives a way to add musicality to future brain implants for those who need it, for example, people with amyotrophic lateral sclerosis (ALS) or another neurological or developmental disorder that affects speech. It gives you the ability to decode not only the linguistic content, but also some of the prosodic content of speech, part of the affect. I think that's what we've really started to figure out."
