Music is a direct pathway to emotions and memories stored deep within the brain, and the connection between music and the brain's neurons is widely recognized as profound. According to a recent study, a person's brain activity patterns may also shape how they process musical notes.
The study, published in the journal PLOS Biology, suggests a direct link between brain activity and musical perception, which the researchers believe could revolutionize technology that helps speech-impaired people talk.
Such devices, known as neuroprostheses, already help individuals with paralysis compose text by merely imagining writing it, and some have been designed to let people construct sentences using their thoughts. When it comes to speech, however, a notable challenge has been capturing the natural rhythm and emotional nuance of spoken language, known as "prosody."
Until now, studies have not been able to achieve a more natural, human-like sound. As a result, these devices produce mechanical-sounding speech that lacks proper intonation.
The team used music, which naturally contains both rhythmic and harmonic elements, to create a model for decoding and recreating sound with richer prosody. Using this approach, they managed to reconstruct a song from a patient's brain recordings.
"Right now, the technology is more like a keyboard for the mind," lead author Ludovic Bellier, of the University of California, Berkeley, said in a statement. "You can't read your thoughts from a keyboard. You need to push the buttons. And it makes kind of a robotic voice; for sure there's less of what I call expressive freedom."
Researchers are optimistic that their study could bring about improvements in brain-computer interface technology.
"As this whole field of brain-machine interfaces progresses, this gives you a way to add musicality to future brain implants for people who need it," explained Robert Knight, a UC Berkeley professor of psychology in the Helen Wills Neuroscience Institute. "It gives you an ability to decode not only the linguistic content but some of the prosodic content of speech, some of the affect. I think that's what we've really begun to crack the code on."
Published by Medicaldaily.com