A paralyzed woman who “talks” with brain signals has become the first person in the world to speak through a digital avatar

Ann, connected to the decoder – credit: UCSF, via SWNS

A paralyzed woman has been able to speak again after her brain signals were intercepted and turned into a talking avatar, complete with facial expressions and a voice built from recordings of her real voice, in a world first.

Ann, 48, suffered a brain stem stroke when she was 30, leaving her paralyzed.

Scientists at the University of California, San Francisco, implanted a paper-thin rectangle of 253 electrodes on the surface of her brain, covering an area important for speech. They then used artificial intelligence to build a brain-computer interface (BCI).

These “talking” brain signals are intercepted and fed to an array of computers via a cable connected to a port in her head.

Computers decode the signals into text at a rate of 80 words per minute, while an audio recording of her voice from her wedding, made years before her stroke, was used to recreate her speaking voice, which was then given to an on-screen avatar complete with facial expressions.

And the team from the University of California, San Francisco, says it’s the first time that speech or facial expressions have been synthesized from brain signals.

“Our goal is to restore a complete and embodied way of communicating that is, in fact, the most natural way for us to talk to others,” said Dr. Edward Chang, chief of neurosurgery at the University of California, San Francisco. “These advances bring us much closer to making this a real solution for patients.”

For several weeks, Ann worked with the team to train the system’s AI algorithms to recognize her brain’s unique signals for speech.

This involved repeating different phrases from a 1,024-word conversational vocabulary over and over, until the computer learned to recognize the patterns of brain activity associated with the sounds.

Instead of training the AI to recognize whole words, the researchers created a system that decodes words from phonemes, the sound sub-units of speech. The word “hello,” for example, has four phonemes: “HH,” “AH,” “L,” and “OW.”

With this approach, the computer only needs to learn 39 phonemes to decode any English word. This improved the accuracy of the system and made it three times faster.
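The idea behind phoneme-based decoding can be illustrated with a minimal, hypothetical sketch: rather than classifying thousands of whole words, a decoder predicts a short phoneme sequence and looks it up in a pronunciation dictionary. The tiny dictionary and greedy matcher below are illustrative assumptions, not the study’s actual system, which pairs a full lexicon with a language model.

```python
# Hypothetical sketch: decoding words from phoneme sequences.
# With only 39 English phonemes as output classes, any word in a
# pronunciation lexicon becomes reachable. The entries below are
# a toy example using ARPAbet-style phoneme labels.

PRONUNCIATIONS = {
    ("HH", "AH", "L", "OW"): "hello",
    ("W", "ER", "L", "D"): "world",
    ("G", "UH", "D"): "good",
}

def decode(phonemes):
    """Greedily match the longest known phoneme sequence at each position."""
    words, i = [], 0
    while i < len(phonemes):
        for j in range(len(phonemes), i, -1):
            word = PRONUNCIATIONS.get(tuple(phonemes[i:j]))
            if word:
                words.append(word)
                i = j
                break
        else:
            i += 1  # skip a phoneme with no dictionary match
    return " ".join(words)

print(decode(["HH", "AH", "L", "OW", "W", "ER", "L", "D"]))  # hello world
```

A small output space (39 phonemes instead of a whole vocabulary) is one plausible reason the approach is both more accurate and faster: the classifier has far fewer categories to distinguish per prediction.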

More crazy tech like this: For the first time, a person with a severed spinal cord can walk freely, thanks to new Swiss technology

“Accuracy, speed, and vocabulary are critical,” said Sean Metzger, who developed the text decoder in the joint Bioengineering Program at UC Berkeley and UC San Francisco. “It’s what gives the user the ability, in time, to communicate at nearly the same speed we do, and to have conversations that are much more natural.”

Using a custom machine-learning process that allowed facial-animation software to mesh with the signals sent from her brain, the computer avatar was able to mimic Ann’s movements, making the jaw open and close, the lips protrude and purse, and the tongue rise and fall, as well as producing facial expressions of happiness, sadness, and surprise.

The team is now working on a wireless version, which would mean the user does not have to be physically connected to computers.

More recovery news: Revolutionary music therapy helps paralyzed man walk and talk again – it ‘opened up the brain’

The current study, published in the journal Nature, builds on previous research by Dr. Chang’s team in which they decoded brain signals into text in a man who had also suffered a stroke many years earlier.

But now they can decode the signals and turn them into the richness of speech, along with the movements that move a person’s face during a conversation.

Watch the story and technology in action from UCSF…
