Brain-computer interfaces allow users to control devices with their brain signals. Artificial intelligence (AI), acting as a co-pilot, helps infer the user's intention, making tasks such as moving a robotic arm or a computer cursor easier to complete.
Engineers at the University of California, Los Angeles (UCLA), in the United States, developed a non-invasive, AI-assisted brain-computer interface, although so far it has only been tested with one patient with leg paralysis and three participants without any condition.
The AI assistance nearly quadrupled the performance of the person with paralysis in tasks such as moving a computer cursor or driving a robotic arm.
The team, which published its study in Nature Machine Intelligence, said the results are promising, but that further work is needed to adapt the system to different users and environments.
The system reads brain activity through electrodes and uses machine learning to improve movement control.
Many everyday actions are goal-directed and follow predictable patterns, such as using a computer or picking up objects, so the AI co-pilot can interpret the user's actions and assist with the movements.
The team developed custom algorithms to decode electroencephalography (EEG), a method that records the brain's electrical activity, and to extract signals that reflect movement intentions.
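The article does not publish the UCLA decoding algorithms, but a common way to turn EEG into a movement command is to extract band-power features per channel and map them linearly to a cursor velocity. The sketch below is illustrative only: the band choice (mu, 8-12 Hz), the feature, and the linear map are assumptions, not the study's method.

```python
# Illustrative EEG-decoding sketch; feature choice and linear map are
# assumptions, not the published UCLA algorithm.
import numpy as np

def band_power(eeg, fs, low, high):
    """Mean spectral power of each channel within a frequency band.
    eeg: array of shape (n_channels, n_samples); fs: sampling rate in Hz."""
    freqs = np.fft.rfftfreq(eeg.shape[1], d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(eeg, axis=1)) ** 2
    band = (freqs >= low) & (freqs <= high)
    return spectrum[:, band].mean(axis=1)

def decode_velocity(eeg, fs, weights):
    """Map mu-band (8-12 Hz) power per channel to a 2-D cursor velocity.
    weights: array of shape (2, n_channels), fitted during calibration."""
    features = band_power(eeg, fs, 8.0, 12.0)  # shape: (n_channels,)
    return weights @ features                   # shape: (2,) -> (vx, vy)
```

In practice the weights would be fitted per user during a calibration session, which is consistent with the article's emphasis on adapting the system to different users.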
The next step was to pair the decoded signals with a camera-based artificial intelligence platform that interprets the user's direction and intention in real time.
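A standard way to combine a decoded user command with an AI co-pilot's suggestion is shared-autonomy blending: the final velocity is a weighted mix of what the EEG decoder outputs and what the vision system infers the user wants. This is a hypothetical sketch of that idea; the function names and the blending rule are assumptions, not the UCLA implementation.

```python
# Hypothetical shared-autonomy blend; alpha and the blending rule are
# illustrative assumptions, not the study's published control law.
def copilot_blend(user_vel, target, cursor_pos, alpha=0.5):
    """Blend the EEG-decoded velocity with an AI-suggested velocity that
    points from the cursor toward the target the vision system inferred."""
    dx = target[0] - cursor_pos[0]
    dy = target[1] - cursor_pos[1]
    norm = (dx * dx + dy * dy) ** 0.5 or 1.0  # avoid division by zero
    ai_vel = (dx / norm, dy / norm)           # unit vector toward target
    # alpha = 0 is pure user control; alpha = 1 is pure AI assistance
    return (
        (1 - alpha) * user_vel[0] + alpha * ai_vel[0],
        (1 - alpha) * user_vel[1] + alpha * ai_vel[1],
    )
```

Tuning alpha trades user authority against assistance, which matches the article's description of "shared autonomy": the co-pilot helps without fully taking over.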
An AI-assisted interface could give independence to people with paralysis
The participants wore a cap to record the EEG while, through a computer-vision system, the personalized AI deduced the users' intention (not their eye movements) to help them complete two tasks.
The person with paralysis achieved 3.9 times higher performance in controlling the computer cursor than without the AI co-pilot, and was able to control a robotic arm to move colored blocks to specific targets, a task they could not complete without the AI's support.
Participants without paralysis performed 2.1 times better once the AI was activated.
This shared-control model can help people with limited physical abilities, such as those with paralysis or neurological diseases, manipulate and move objects more easily and precisely.
Researchers ultimately want to develop AI-assisted brain-computer interface systems that "offer shared autonomy, allowing people with movement disorders, such as paralysis or ALS, to regain some independence for daily tasks," in the words of Jonathan Kao of UCLA, one of the authors of the article.
State-of-the-art brain-computer interfaces, which are surgically implanted, can translate brain signals into commands, but the benefits they currently offer are outweighed by the risks and costs of the neurosurgery required to implant them, UCLA notes.
Next steps for AI-assisted interface systems could include developing more advanced co-pilots that move robotic arms with greater speed and precision, and that offer a dexterous grip that adapts to the object the user wants to grasp, according to the team.
With information from EFE