State attorneys general warn Microsoft, OpenAI, Google, and other AI giants to fix ‘delusional’ outputs

After a string of disturbing mental health incidents involving AI chatbots, a group of state attorneys general sent a letter to the AI industry’s top companies warning them to fix “delusional outputs” or risk violating state law.

The letter, signed by dozens of AGs from U.S. states and territories through the National Association of Attorneys General, asks 13 major AI firms, including Microsoft, OpenAI, and Google, to implement a variety of new internal safeguards to protect their users. The other recipients are Anthropic, Apple, Chai AI, Character Technologies, Luka, Meta, Nomi AI, Perplexity AI, Replika, and xAI.

The letter arrives as a fight over AI regulation brews between state and federal governments.

Those safeguards include transparent third-party audits of large language models that look for signs of delusional or sycophantic outputs, as well as new incident reporting procedures designed to notify users when chatbots produce psychologically harmful outputs. Those third parties, which could include academic and civil society groups, should be allowed to “evaluate systems pre-release without retaliation and to publish their findings without prior approval from the company,” the letter states.

“GenAI has the potential to change how the world works in a positive way. But it also has caused — and has the potential to cause — serious harm, especially to vulnerable populations,” the letter states, pointing to a number of well-publicized incidents over the past year, including suicides and murder, in which violence has been linked to excessive AI use. “In many of these incidents, the GenAI products generated sycophantic and delusional outputs that either encouraged users’ delusions or assured users that they were not delusional.”

AGs also suggest companies treat mental health incidents the same way tech companies handle cybersecurity incidents — with clear and transparent incident reporting policies and procedures.

Companies should develop and publish “detection and response timelines for sycophantic and delusional outputs,” the letter states. In a similar fashion to how data breaches are currently handled, companies should also “promptly, clearly, and directly notify users if they were exposed to potentially harmful sycophantic or delusional outputs,” the letter says. 

Another ask is that the companies develop “reasonable and appropriate safety tests” for GenAI models to “ensure the models do not produce potentially harmful sycophantic and delusional outputs.” These tests should be conducted before the models are ever offered to the public, the letter adds.

TechCrunch was unable to reach Google, Microsoft, or OpenAI for comment prior to publication. The article will be updated if the companies respond.

Tech companies developing AI have had a much warmer reception at the federal level.

The Trump administration has made it known it is unabashedly pro-AI, and, over the past year, multiple attempts have been made to pass a nationwide moratorium on state-level AI regulations. So far, those attempts have failed—thanks, in part, to pressure from state officials.

Not to be deterred, Trump announced Monday that he plans to sign an executive order next week limiting states’ ability to regulate AI. The president said in a post on Truth Social that he hoped the EO would stop AI from being “DESTROYED IN ITS INFANCY.”
