Character.AI Gave Up on AGI. Now It’s Selling Stories

“AI is expensive. Let’s be honest about that,” Anand says.

Growth vs. Safety

In October 2024, the mother of a teen who died by suicide filed a wrongful death suit against Character Technologies, its founders, Google, and Alphabet, alleging the company targeted her son with “anthropomorphic, hypersexualized, and frighteningly realistic experiences, while programming [the chatbot] to misrepresent itself as a real person, a licensed psychotherapist, and an adult lover.” At the time, a Character.AI spokesperson told CNBC that the company was “heartbroken by the tragic loss” and took “the safety of our users very seriously.”

The tragic incident put Character.AI under intense scrutiny. Earlier this year, US senators Alex Padilla and Peter Welch wrote a letter to several AI companionship platforms, including Character.AI, highlighting concerns about “the mental health and safety risks posed to young users” of the platforms.

“The team has been taking this very responsibly for almost a year now,” Anand tells me. “AI is stochastic, it’s kind of hard to always understand what’s coming. So it’s not a one-time investment.”

That’s critically important because Character.AI is growing. The startup has 20 million monthly active users who spend, on average, 75 minutes a day chatting with a bot (a “Character” in Character.AI parlance). The company’s user base is 55 percent female. More than 50 percent of its users are Gen Z or Gen Alpha. With that growth comes real risk. What is Anand doing to keep his users safe?

“[In] the last six months, we’ve invested a disproportionate amount of resources in being able to serve under 18 differently than over 18, which was not the case last year,” Anand says. “I can’t say, ‘Oh, I can slap an 18+ label on my app and say use it for NSFW.’ You end up creating a very different app and a different small-scale platform.”

More than 10 of the company’s 70 employees work full-time on trust and safety, Anand tells me. They’re responsible for building safeguards like age verification, separate models for users under 18, and new features such as parental insights, which allow parents to see how their teens are using the app.

The under-18 model launched last December. It includes “a narrower set of searchable Characters on the platform,” according to company spokesperson Kathryn Kelly. “Filters have been applied to this set to remove Characters related to sensitive or mature topics.”

But Anand says AI safety will take more than just technical tweaks. “Making this platform safe is a partnership between regulators, us, and parents,” Anand says. That’s what makes watching his daughter chat with a Character so important. “This has to stay safe for her.”

Beyond Companionship

The AI companionship market is booming. Consumers worldwide spent $68 million on AI companionship in the first half of this year, a 200 percent increase from last year, according to an estimate cited by CNBC. AI startups are gunning for a slice of the market: xAI released a creepy, pornified companion in July, and even Microsoft bills its Copilot chatbot as an AI companion.

So how does Character.AI stand out in a crowded market? It takes itself out of it entirely.
