As AI tools like ChatGPT become increasingly popular for those seeking personal therapy and emotional support, the dangers this can present – especially for young people – have made numerous headlines. However, not as much attention is paid to companies’ use of generative AI to assess the psychological well-being of their employees and provide them with emotional support in the workplace.
Since the global shift to remote work caused by the pandemic, sectors ranging from healthcare to human resources to customer service have seen a significant increase in the use of AI-enabled systems designed to analyze the emotional state of employees, identify those suffering emotional distress, and provide support.
This new frontier represents a huge leap beyond employees turning to conventional chat tools or individual therapy apps for psychological support. As researchers studying how AI affects emotions and relationships in the workplace, we are concerned about the crucial questions this shift raises: What happens when an employer has access to employees’ emotional data? Can AI really provide the kind of emotional support employees need? What happens if the AI fails? And if something goes wrong, who is responsible?
The difference in the workplace
Many companies began offering automated counseling programs similar to personal therapy apps, a practice that has shown some benefits. In preliminary studies, researchers found that in doctor-patient-style virtual conversations, AI-generated responses made people feel more heard than responses from a human therapist. A study comparing AI chatbots with human psychotherapists found that the bots’ responses were “at least as empathetic as the therapists’ responses, and sometimes even more so.”
This may seem surprising at first glance, but AI offers constant attention and consistently supportive responses. It doesn’t interrupt, it doesn’t judge, and it doesn’t get frustrated when the same concerns are repeated. For some employees, especially those dealing with stigmatized issues like mental health or workplace conflict, this consistency feels safer than human interaction.
But for others, it raises new concerns. A 2023 study found that workers were reluctant to participate in company-driven mental health programs because of concerns about confidentiality and stigma. Many feared that their disclosures could negatively affect their careers.
Other workplace AI systems go much deeper, analyzing employee communication in real time: emails, Slack conversations, and video calls. This analysis creates detailed records of employees’ emotional states, stress patterns, and psychological vulnerabilities. All of this data resides in corporate systems where privacy protections are often unclear and tend to favor the employer’s interests.
Global employee support provider Workplace Options has partnered with Wellbeing.ai to deploy a platform that uses facial analysis to track emotional states across 62 categories. It generates well-being scores that organizations can use to detect stress or morale problems. This approach integrates AI into emotionally sensitive aspects of work, dangerously blurring the line between support and surveillance.
In this scenario, the same AI that helps employees feel heard and supported also generates unprecedented insights into the emotional dynamics of the workforce. Organizations can now track which departments are showing signs of burnout, identify employees at risk of quitting, and monitor emotional responses to organizational changes.
However, this type of tool also turns emotional data into strategic information for management, which poses a real dilemma for many companies. While the most forward-thinking organizations are establishing strict data governance, limiting access to anonymized patterns rather than individual conversations, others are tempted to use emotional information in performance evaluations and personnel decisions.
The continuous surveillance carried out by some of these systems can help ensure that companies do not neglect a vulnerable group or individual, but it can also lead people to monitor their own actions to avoid attracting attention.
Research on AI monitoring in the workplace has shown that employees experience greater stress and modify their behavior when they know management can review their interactions. This kind of monitoring undermines the sense of security people need to seek help with confidence.
Another study found that these systems increased employee distress due to loss of privacy and fear of the consequences if the system identified them as stressed or suffering from burnout.
When artificial empathy faces real consequences
These findings matter because the stakes are undoubtedly even higher in the workplace than in the personal sphere. AI systems lack the nuanced judgment needed to distinguish between accepting someone as a person and endorsing harmful behavior. In organizational contexts, this means an AI could inadvertently validate unethical work practices or fail to recognize when human intervention is crucial.
And that’s not the only way AI systems can get it wrong. One study found that emotion-tracking AI tools had a disproportionate impact on employees of color, transgender and nonbinary people, and people with mental illness. Interviewees expressed deep concern about how these tools could misinterpret an employee’s mood, tone, or verbal cues because of the ethnic, gender, and other biases built into AI systems.
There is also a problem of authenticity. Research shows that when people know they are talking to an AI system, they rate the same empathetic responses as less authentic than when they attribute them to humans. Yet some employees prefer AI precisely because they know it is not human. The sense that these tools protect their anonymity and shield them from social consequences appeals to some, even if it is only a feeling.
Technology also raises questions about the role of managers. If employees consistently prefer AI for emotional support, what does this reveal about organizational leadership? Some companies use the information provided by AI to train their managers in emotional intelligence, turning technology into a mirror that reflects the deficiencies of human skills.
The way forward
The debate over AI emotional support in the workplace is not just about the technology, but about the kind of companies people want to work for. As these systems become more widespread, we believe it is important to address fundamental questions: Should employers prioritize authentic human connection over constant availability? How can individual privacy be balanced with organizational insight? Can organizations harness the empathetic capabilities of AI while preserving the trust necessary for meaningful working relationships?
The most successful implementations recognize that AI should not replace human empathy, but rather create the conditions for it to flourish. When AI takes care of the routine emotional labor—the late-night anxiety attacks, the pre-meeting stress assessments, the processing of difficult feedback—managers have more time to make deeper, more authentic connections with their teams.
But this requires careful implementation. Companies that establish clear ethical boundaries, strong privacy protections, and explicit policies on how emotional data is used are more likely to avoid the problems of these systems, as are those that recognize when human judgment and authentic presence remain irreplaceable.
*Nelson Phillips is a Distinguished Professor of Technology Management and Fares Ahmad is a PhD candidate in Technology Management, both at the University of California, Santa Barbara.
This text was originally published in The Conversation.