A man with paralysis has managed to grasp, move and drop objects with a robotic arm connected to a device that transmits his brain signals to a computer. The patient was able to perform all these actions simply by imagining them.
The device, called a brain-computer interface (BCI) and developed by researchers at the University of California, San Francisco (UCSF), worked for seven months without requiring any adjustment. Until now, these types of devices had only worked for one or two days.
The BCI uses an artificial intelligence (AI) model that can adapt to the small changes that occur in the brain when a person repeats a movement (in this case, an imagined movement) and learns to perform it in a more refined way.
“This blending of learning between humans and AI is the next phase for these brain-computer interfaces, and it is what we need to achieve sophisticated, lifelike function,” said Karunesh Ganguly, a neurologist at the UCSF Weill Institute for Neurosciences.
The details of the device, funded by the United States National Institutes of Health (NIH), were published this Thursday in the journal Cell.
The key to this innovation was understanding how activity in the brain changes from day to day as a study participant repeatedly imagines making specific movements. Once the AI was programmed to take these changes into account, it worked for months at a time, the authors explain.
Brain sensors and the connection to the robotic arm
For months, Ganguly studied how patterns of brain activity in animals represent specific movements, and saw that these representations changed day by day as the animal learned. He suspected the same thing was happening in humans, and that this was why BCIs lost the ability to recognize these patterns so quickly.
With that information, his team worked with a patient who had been left paralyzed by a stroke and could not speak or move.
The man had small sensors implanted on the surface of his brain that could pick up his brain activity when he imagined moving.
To see whether his brain patterns changed over time, Ganguly asked him to imagine moving different parts of his body. Although he could not actually move them, his brain still generated signals, which the BCI recorded through the implanted sensors.
The team discovered that the shape of these representations in the brain remained the same, but their locations shifted slightly from one day to the next.
Ganguly then asked the participant to imagine making simple movements with his fingers, hands or thumbs over the course of two weeks, while the sensors recorded his brain activity to train the AI.
Afterwards, the patient tried to control a robotic arm and hand, but the movements were still not very precise.
So Ganguly had him practice with a virtual robotic arm that gave him feedback on the accuracy of his visualizations, and eventually he got the virtual arm to do what he wanted.
When the patient began practicing with the real robotic arm, it took him only a few sessions to transfer his skills to the real world and make the robotic arm pick up blocks, turn them and move them to new locations. He could even open a cabinet, take out a cup and hold it up to a water dispenser.
Months later, the patient could still control the robotic arm, and needed only minimal ‘tuning’ to adjust for how his movement representations had changed since he began using the device.
The team is now refining the AI models to make the robotic arm move faster, with the aim of testing it in a home environment.
For people with paralysis, the ability to feed themselves or get a drink of water would be life-changing, something Ganguly believes is within reach: “I am very confident that we have learned how to build the system now, and that we can make it work,” he said.
With information from EFE.