The Pew Research Center released a study on Tuesday that shows how young people are using both social media and AI chatbots.
Teen internet safety has remained a global hot topic, with Australia planning to enforce a social media ban for under-16s starting on Wednesday. The impact of social media on teen mental health has been extensively debated: some studies suggest online communities can improve mental health, while other research documents the adverse effects of doomscrolling or spending too much time online. The U.S. surgeon general even called for social media platforms to put warning labels on their products last year.
Pew found that 97% of teens use the internet daily, with about 40% of respondents saying they are “almost constantly online.” While this marks a decrease from last year’s survey (46%), it’s significantly higher than the results from a decade ago, when 24% of teens said they were online almost constantly.
But as the prevalence of AI chatbots grows in the U.S., this technology has become yet another factor in the internet’s impact on American youth.

About three in ten U.S. teens are using AI chatbots every day, the Pew study reveals, with 4% saying they use them almost constantly. Fifty-nine percent of teens say they use ChatGPT, which is more than twice as popular as the next two most-used chatbots, Google’s Gemini (23%) and Meta AI (20%). Forty-six percent of U.S. teens say they use AI chatbots at least several times a week, while 36% report not using AI chatbots at all.
Pew’s research also details how race, age, and class impact teen chatbot use.
About 68% of Black and Hispanic teens surveyed said they use chatbots, compared to 58% of white respondents. In particular, Black teens were about twice as likely to use Gemini and Meta AI as white teens.
“The racial and ethnic differences in teen chatbot use were striking […] but it’s tough to speculate about the reasons behind those differences,” Pew Research Associate Michelle Faverio told TechCrunch. “This pattern is consistent with other racial and ethnic differences we’ve seen in teen technology use. Black and Hispanic teens are more likely than White teens to say they’re on certain social media sites — such as TikTok, YouTube, and Instagram.”

Across all internet use, Black (55%) and Hispanic teens (52%) were around twice as likely as white teens (27%) to say that they are online “almost constantly.”
Older teens (ages 15 to 17) tend to use both social media and AI chatbots more often than younger teens (ages 13 to 14). When it comes to household income, about 62% of teens living in households making more than $75,000 per year said they use ChatGPT, compared to 52% of teens below that threshold. But Character.AI is twice as popular among teens in households earning less than $75,000, 14% of whom say they use it.
While teenagers may start out using these tools for basic questions or homework help, their relationship to AI chatbots can become addictive and potentially harmful.
The families of at least two teens, Adam Raine and Amaurie Lacey, have sued ChatGPT maker OpenAI over its alleged role in their children’s suicides. In both cases, the lawsuits say, ChatGPT gave the teenagers detailed instructions on how to hang themselves, instructions the teens ultimately followed.
(OpenAI claims it should not be held liable for Raine’s death because the sixteen-year-old allegedly circumvented ChatGPT’s safety features and thus violated the chatbot’s terms of service; the company has yet to respond to the Lacey family’s complaint.)
Character.AI, an AI role-playing platform, is also facing scrutiny over its impact on teen mental health; at least two teenagers died by suicide after prolonged conversations with its chatbots. The startup ultimately decided to stop offering its chatbots to minors, instead launching a product called “Stories” for underage users that more closely resembles a choose-your-own-adventure game.
The experiences reflected in the lawsuits against these companies make up a small percentage of all interactions on ChatGPT or Character.AI, and in many cases conversations with chatbots are entirely benign. According to OpenAI’s data, only 0.15% of ChatGPT’s active users have conversations about suicide each week. But on a platform with 800 million weekly active users, that small percentage works out to more than a million people, roughly 1.2 million, who discuss suicide with the chatbot every week.
“Even if [AI companies’] tools weren’t designed for emotional support, people are using them in that way, and that means companies do have a responsibility to adjust their models to be solving for user well-being,” Dr. Nina Vasan, a psychiatrist and director of Brainstorm: The Stanford Lab for Mental Health Innovation, told TechCrunch.