Brain implant converts thoughts into words almost in real time • Technology • Forbes Mexico

A brain implant that uses artificial intelligence has managed to convert the thoughts of a paralyzed woman into speech almost in real time, American researchers said Monday.

Although it has so far been tested in only one person, this achievement with an implant that connects brain waves to a computer has raised hopes that other people who have completely lost the ability to communicate could recover their voice.

The team of California-based researchers had previously used a brain-computer interface (BCI) to decode the thoughts of Ann, a 47-year-old woman with quadriplegia, and translate them into speech.

However, there was an eight-second delay between the generation of her thoughts and the speech being read aloud by a computer.

That meant a fluid conversation was out of reach for Ann, a high school mathematics teacher who has been unable to speak since she suffered a stroke 18 years ago.

But the team's new model, presented in Nature Neuroscience, transformed Ann's thoughts into a version of her own voice with a delay of just 80 milliseconds.

"Our new real-time approach converts brain signals into her personalized voice almost immediately, less than a second after she tries to speak," the study's lead author, Gopala Anumanchipalli of the University of California, Berkeley, told AFP.

Anumanchipalli added that Ann's ultimate goal is to become a university counselor.

"Although we are still far from achieving that for Ann, this advance brings us closer to drastically improving the quality of life of people with vocal paralysis," he said.


"Excited to hear her voice," thanks to the brain implant

During the research, Ann could see sentences on a screen, such as "then you love me," which she pronounced to herself in her mind.

These brain signals were quickly turned into her voice, which the researchers reconstructed from recordings made before her injury.

Ann was "very excited to hear her voice and reported a feeling of embodiment," Anumanchipalli said.

The model uses an algorithm based on an artificial intelligence (AI) technique called deep learning, which was previously trained on thousands of phrases that Ann attempted to pronounce silently.

The model is not entirely accurate, and its vocabulary is limited for now to 1,024 words.
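The key shift described above is from sentence-level decoding (wait for the whole thought, then speak it, with an eight-second lag) to streaming decoding (emit a word as each short window of brain signals arrives). The sketch below is a toy illustration of that streaming idea, not the study's actual model: `decode_window` is a placeholder stand-in for the trained deep-learning decoder, and the feature values and vocabulary entries are invented for the example.

```python
# Toy sketch of streaming decoding over a fixed 1,024-word vocabulary.
# Each short window of neural features is mapped to a word as it arrives,
# instead of waiting for a full sentence -- the property that removes
# the old sentence-level delay. The decoder here is a dummy placeholder.
from typing import Iterable, List

VOCAB_SIZE = 1024
VOCAB = [f"word_{i}" for i in range(VOCAB_SIZE)]  # placeholder vocabulary

def decode_window(features: List[float]) -> str:
    """Stand-in for a trained deep-learning decoder: maps one short
    window of neural features to a vocabulary entry."""
    index = int(sum(features)) % VOCAB_SIZE
    return VOCAB[index]

def stream_decode(windows: Iterable[List[float]]) -> List[str]:
    """Emit one decoded word per incoming window, so output is available
    almost immediately rather than after the whole utterance."""
    out = []
    for w in windows:
        out.append(decode_window(w))  # word emitted as soon as its window arrives
    return out

windows = [[1.0, 2.0], [3.0, 4.0], [5.0, 5.0]]  # fabricated feature windows
print(stream_decode(windows))  # -> ['word_3', 'word_7', 'word_10']
```

The point of the sketch is only the control flow: decoding happens per window, so latency is bounded by the window length rather than by sentence length.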

Patrick Degenaar, an expert in neuroprosthesis at Newcastle University in the United Kingdom, who did not participate in the study, stressed that this research is a “very early test” of the effectiveness of the method.

Still, it’s “great,” he said.

Degenaar noted that this system uses an array of electrodes that do not penetrate the brain, unlike the BCI used by Elon Musk's company Neuralink.

The procedure for installing such electrodes is relatively common in hospitals that diagnose epilepsy, which means the technology could be deployed at scale fairly easily, he added.

With agency information.
