It can be suffocating to be unable to talk or convey one’s thoughts. Combine that with a severe impairment such as the paralysis that caused it in the first place, and you have a predicament you would not wish on anyone. Regrettably, many people face exactly that.
Researchers at the University of California have created a system that enables a paralyzed person who cannot speak to ‘speak’ again, which could be of great assistance to such patients and their loved ones.
That assistance comes in the form of a computer avatar that can talk and also display facial expressions such as a smile or surprise.
However, how does the avatar choose its words?
Signals from a brain-computer interface (BCI) are sent to the avatar.
The method uses microscopic electrodes implanted on the patient’s brain surface. These electrodes pick up electrical signals from the area of the brain that controls speech and facial expression. The avatar’s voice and facial expressions are then generated from these signals.
Put simply, the avatar ‘gets’ what the person with paralysis wants to communicate from what they ‘think.’
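To make that flow concrete, here is a minimal sketch in Python of the kind of pipeline the article describes: a window of electrode recordings goes into a decoder, and the decoded text and expression drive the avatar. Every name below is illustrative; this is an assumption about the general architecture, not the UCSF team’s actual software.

```python
# Hypothetical sketch of the signal-to-avatar pipeline described above.
# All class and function names are illustrative placeholders.
from dataclasses import dataclass

import numpy as np


@dataclass
class DecodedIntent:
    text: str          # what the patient intended to say
    expression: str    # e.g. "smile", "surprise", "neutral"


def decode_brain_signals(signals: np.ndarray) -> DecodedIntent:
    """Placeholder for the trained decoder: maps a window of multichannel
    electrode recordings (channels x samples) to intended speech and expression."""
    # A real decoder would be a neural network trained on the patient's own signals.
    return DecodedIntent(text="Hello, how are you?", expression="smile")


def drive_avatar(intent: DecodedIntent) -> None:
    """Placeholder for the avatar front end: speaks the text and animates the face."""
    print(f"[avatar says] {intent.text}")
    print(f"[avatar shows] {intent.expression}")


if __name__ == "__main__":
    # Simulated 253-channel recording window, matching the electrode count
    # reported later in the article; window length is arbitrary here.
    window = np.random.randn(253, 200)
    drive_avatar(decode_brain_signals(window))
```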
The effort was headed by Prof. Edward Chang at the University of California, San Francisco (UCSF). “Our goal is to restore a full, embodied way of communicating, which is really the most natural way for us to talk with others,” he said, adding that these advances bring the team much closer to offering patients a practical option.
He was quoted in a report by The Guardian.
The patient in this case is Ann, 47. She reportedly suffered a brainstem stroke and has been severely paralyzed for more than 18 years.
Researchers implanted a paper-thin sheet of 253 electrodes onto the surface of Ann’s brain, over an area essential for speech. The electrodes intercepted brain signals that would otherwise have controlled her larynx, jaw, tongue, and face.
Ann then collaborated with the group to train an AI model and calibrate it using her specific brain signals.
As a result, the computer learned to recognize 39 distinct sounds. The next step was to ‘translate’ Ann’s brain signals into sentences using a language model similar to ChatGPT.
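Below is a simplified sketch of that two-stage idea, assuming a basic classifier over 39 sound units followed by a language-model-style decoding step. The feature extraction, the weights, and the placeholder tokens are all hypothetical stand-ins, not the actual UCSF models.

```python
# Hypothetical two-stage decoding sketch: (1) classify each window of neural
# activity as one of 39 sound units, (2) assemble the sound sequence into text.
import numpy as np

NUM_SOUNDS = 39      # the article reports the system learned 39 distinct sounds
NUM_CHANNELS = 253   # electrodes in the implanted array
WINDOW = 200         # samples per decoding window (illustrative)


def classify_sound(window: np.ndarray, weights: np.ndarray) -> int:
    """Stage 1: map one window of neural activity to the most likely sound unit."""
    features = window.mean(axis=1)        # crude per-channel feature (illustrative)
    scores = weights @ features           # linear scores for each of the 39 sounds
    return int(np.argmax(scores))


def decode_to_sentence(sound_ids: list[int]) -> str:
    """Stage 2: a language model would rescore sound sequences into likely words.
    Here we just join placeholder tokens to show where that step happens."""
    return " ".join(f"<sound_{i}>" for i in sound_ids)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    weights = rng.normal(size=(NUM_SOUNDS, NUM_CHANNELS))   # stands in for a trained decoder
    windows = rng.normal(size=(5, NUM_CHANNELS, WINDOW))    # five simulated windows
    sounds = [classify_sound(w, weights) for w in windows]
    print(decode_to_sentence(sounds))
```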