Recently, I opened up completely, not to a person, but to a chatbot named Wysa on my phone. It nodded, virtually, asked me how I felt and kindly suggested I try breathing exercises to improve my mental health.
As a neuroscientist, I couldn't help asking myself: Did I really feel better, or was I simply being gently redirected by a well-trained algorithm? Could a string of code really help calm a storm of emotions?
Mental health tools driven by artificial intelligence are increasingly popular and persuasive. But behind their soothing prompts lie important questions: How effective are these tools? What do we really know about how they work? And what are we giving up in exchange for convenience?
Of course, it is an exciting moment for digital mental health. But understanding the drawbacks and limitations of AI-based care is crucial.
Meditation apps and therapy-replacement bots
AI-based therapy is a relatively new player in the field of digital therapy. But the US mental health app market has boomed in recent years, ranging from apps with free tools that text you to premium versions with added features that offer prompts for breathing exercises.
Headspace and Calm are two of the best-known meditation and mindfulness apps, offering guided meditations, bedtime stories and calming soundscapes to help users relax and sleep better. Talkspace and BetterHelp go a step further, offering sessions with certified therapists via chat, video or voice. The apps Happify and Moodfit aim to boost mood and counter negative thinking with game-based exercises.
Somewhere in the middle sit chatbot therapists like Wysa and Woebot, which use AI to mimic real therapeutic conversations, often rooted in cognitive behavioral therapy. These apps typically offer free basic versions, with paid plans ranging from $10 to $100 per month for fuller features or access to certified professionals.
Although they are not specifically designed for therapy, conversational tools such as ChatGPT have sparked curiosity about AI's emotional intelligence.
Some users have turned to ChatGPT for mental health advice, with mixed results, including a widely reported case in Belgium in which a man died by suicide after months of conversations with a chatbot. Elsewhere, a father is seeking answers after his son was fatally shot by police, alleging that distressing conversations with an AI chatbot may have influenced his son's mental state. These cases raise ethical questions about the role of AI in sensitive situations.
Where does AI come in?
If your brain is spiraling, stuck in a bad mood or simply in need of a nap, there is a chatbot for that. But can AI really help your brain process complex emotions? Or are we simply outsourcing stress to empathetic-sounding, silicon-based support systems?
And how exactly does AI-powered therapy work on our brains?
Most AI mental health apps promise some version of cognitive behavioral therapy, which is essentially structured self-talk for your inner chaos. Think of it as Marie Kondo, the Japanese tidying expert known for helping people keep only what "sparks joy." You identify unhelpful thought patterns such as "I'm a failure," examine them, and decide whether they serve you or simply fuel anxiety.
But can a chatbot help you rewire your thoughts? Surprisingly, the science suggests it might. Studies have shown that digital forms of talk therapy can reduce symptoms of anxiety and depression, especially for mild to moderate cases. In fact, Woebot has published peer-reviewed research showing reduced depressive symptoms in young adults after just two weeks of chatting.
These apps are designed to simulate therapeutic interaction, offering empathy, asking guided questions and walking the user through evidence-based tools. The goal is to support decision-making and self-control, and to help calm the nervous system.
The neuroscience behind cognitive behavioral therapy is solid: It's about engaging the brain's executive control centers, helping us shift our attention, challenge automatic thoughts and regulate our emotions.
The question is whether a chatbot can reliably replicate that, and whether our brains actually believe it.
A user's experience, and what it might mean for the brain
"I had a rough week," a friend told me recently. I asked her to try a mental health chatbot for a few days. She told me the bot responded with an encouraging emoji and a prompt, generated by its algorithm, to try a calming strategy tailored to her mood. Then, to her surprise, it helped her feel better by the end of the week. As a neuroscientist, I couldn't help but wonder: Which neurons in her brain had been activated to help her feel calm?
This is not an isolated story. A growing number of user surveys and clinical trials suggest that interactions with chatbots grounded in cognitive behavioral therapy can lead to short-term improvements in mood, focus and even sleep. In randomized studies, users of mental health apps have reported reduced symptoms of depression and anxiety, results that closely align with how face-to-face cognitive behavioral therapy influences the brain.
Several studies show that therapy chatbots can genuinely help people feel better. In one clinical trial, a chatbot called "Therabot" helped reduce depression and anxiety symptoms by nearly half, similar to what people experience with human therapists. Other research, including a review of more than 80 studies, found that AI chatbots are especially helpful for improving mood, reducing stress and even helping people sleep better. In one study, a chatbot outperformed a self-help book at improving mental health after just two weeks.
Although people often report feeling better after using these chatbots, scientists have not yet confirmed exactly what happens in the brain during these interactions. In other words, we know that they work for many people, but we are still learning how and why.
Warning signs and risks
Apps such as Wysa have earned the FDA's Breakthrough Device designation, a status that fast-tracks promising technologies for serious conditions, suggesting they may offer real clinical benefit. Woebot, similarly, runs randomized clinical trials showing improved depression and anxiety symptoms in new mothers and college students.
While many mental health apps boast labels such as "clinically validated" or "FDA approved," those claims are often unverified. A review of leading apps found that most made bold claims, but fewer than 22% cited actual scientific studies to back them up.
In addition, chatbots collect sensitive information about your mood metrics, triggers and personal stories.
What if that data ends up in the hands of third parties such as advertisers, employers or hackers, a scenario that has already played out with genetic data? In a 2023 data breach, nearly 7 million users of the DNA testing company 23andMe had their DNA and personal details exposed after hackers used previously leaked passwords to access their accounts. Regulators later fined the company more than $2 million for failing to protect user data.
Unlike clinicians, bots are not bound by counseling ethics or medical privacy laws. You might be getting a form of cognitive behavioral therapy, but you are also feeding a database.
And of course, bots can guide you through breathing exercises or prompt cognitive reappraisal, but when you are facing real complexity or an emotional crisis, they often fall short. Human therapists draw on nuance, past trauma, empathy and live feedback loops. Can an algorithm say "I hear you" with genuine understanding? Neuroscience suggests that supportive human connection activates social brain networks that AI cannot reach.
So while bot-delivered cognitive behavioral therapy may offer short-term symptom relief in mild to moderate cases, it is important to be mindful of its limitations. For now, pairing bots with human care, rather than replacing it, is the safest option.
*Pooja Shree Chettiar is a PhD candidate in Medical Sciences at Texas A&M University.
This article was originally published on The Conversation.