The technology company OpenAI has presented a legislative proposal in California focused on the safety of AI chatbots for children, marking the company's first foray into state policy just as it faces a lawsuit over the death of a minor who interacted with ChatGPT for months.
The measure, titled the ‘AI Companion Chatbot Safety Act’, proposes safety controls for AI chatbots such as the company’s flagship product, ChatGPT.
The measure seeks to complement the existing chatbot safety and mental health protections enacted in legislation signed by California Governor Gavin Newsom last October, an OpenAI advisor told the news outlet Politico.
The company is also reportedly in talks with various stakeholders, including other technology companies, industry groups and user-rights advocates, to advance the legislation.
OpenAI’s lawyer, Tom Hiltachk, presented the initiative, which has been open for public comment on the website of the California Attorney General’s Office since Monday night.
To qualify for the November 2026 ballot, the initiative must gather the required number of supporting signatures.
The initiative competes with another proposal on AI safety for children, put forward by Common Sense Media, a nonprofit dedicated to children’s online safety, whose measure is stricter than OpenAI’s.
The safety of minors using chatbots has become a priority in California. Last October, Newsom signed a law requiring platforms to remind users that they are interacting with a chatbot and not a human.
For underage users, the notification will appear every three hours. In addition, technology companies must establish protocols to prevent, address and report cases of suicidal ideation.
OpenAI is in a legal battle over the death of 16-year-old Adam Raine, who took his own life last April after months of interacting with ChatGPT.
The legal complaint, filed in a San Francisco, California, court, states that “ChatGPT actively helped Adam explore methods of suicide,” accusing OpenAI, the company behind GPT-4o, and its CEO, Sam Altman, of wrongful death.
The company, for its part, has denied responsibility for the suicide, arguing that “the injuries and damages alleged by the plaintiffs were caused or contributed to (…) by improper, unauthorized, unpredictable and inappropriate use of ChatGPT” by the teenager.
With information from EFE












































