Anthropic is now the third artificial intelligence company whose chatbot conversations with users have become inadvertently accessible in Google search results.
The conversations appeared to be ones that users of its chatbot, Claude, chose to "share." Like the sharing features in ChatGPT and xAI's Grok, which exposed hundreds of thousands of conversations that later became searchable on Google, Claude's "Share" button created a dedicated web page hosting the conversation, allowing users to share a link to it. Unlike OpenAI and xAI, however, Anthropic said it blocked Google's crawlers, which should have prevented those pages from being indexed. Even so, hundreds of Claude conversations were still available in search results (they have since been removed).
The visible Claude conversations included prompts asking the chatbot to build applications and games, including an "Anthropic comedy office simulator." Other users tasked Claude with writing a book, programming, and completing corporate tasks that revealed staff names and email addresses. Several transcripts made it possible to identify users by name or by details shared in their prompts. Google estimated it had indexed just under 600 conversations.
Anthropic spokeswoman Gabby Curtis told Forbes that Claude conversations were only visible on Google and Bing because users had posted links to them online or on social media. "We give users control over publicly sharing their Claude conversations and, in line with our privacy principles, we do not share directories or sitemaps of shared chats with search engines like Google, and we actively prevent them from crawling our site," Curtis said in an email to Forbes.
Hundreds of Anthropic chatbot transcripts appeared in Google search results
However, Forbes spoke with one of the users identifiable from their public Claude conversation, who said they had not posted the work-related chatbot conversation online. The user asked not to be named because of their job.
As of press time, Google had not answered questions about why the transcripts appeared in its results despite Anthropic's statement that it actively blocked crawlers. On Monday, the previously visible results disappeared from Google's search results page.
The appearance of chatbot transcripts in search results has become a trend in recent months. In July, OpenAI apologized after users discovered that many of their "shared" ChatGPT transcripts were searchable online. In August, Forbes found that hundreds of thousands of transcripts from xAI's Grok had also been indexed and were searchable, without users' knowledge or consent. The Grok transcripts included depictions of sexual violence, instructions for making drugs and bombs, and a Grok-generated plan to kill Elon Musk. (xAI did not respond to a request for comment at the time.)
OpenAI had offered users the option to make ChatGPT conversations "discoverable" and warned that doing so would make them visible on Google, while Grok gave no warning that shared conversations could be indexed by search engines. OpenAI discontinued its sharing feature in August, describing it as "a short-lived experiment." "We believe this feature created too many opportunities for people to accidentally share information they did not intend to, so we removed it," OpenAI chief information security officer Dane Stuckey said in a post on X. OpenAI also said it was working to remove ChatGPT conversations from search engines.
Like xAI, Anthropic did not warn users that their conversations could become public. Unlike xAI, however, Anthropic kept the files users uploaded to Claude private, even when they were part of public chats, so documents containing potentially proprietary business information were not exposed. Still, in some cases reviewed by Forbes, the bot's responses to those documents included direct quotes from them, which were published and visible in the transcripts.
Anthropic said it instructs web crawlers not to index its shared pages via its robots.txt file, an online protocol websites use to give instructions to search engines, though compliance with those instructions is not guaranteed. Anthropic's own web crawlers have drawn complaints from website owners over "egregious" data scraping, and some claim Anthropic ignored their robots.txt instructions. Reddit sued Anthropic in June over this scraping (Anthropic said at the time that it respected publishers and tried not to be "intrusive or disruptive"). The AI company reached a $1.5 billion settlement with authors last week over allegations that it pirated books to train its AI models (Anthropic did not admit wrongdoing).
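For illustration, a robots.txt entry that asks crawlers to skip shared-chat pages might look like the sketch below. This is a hypothetical example of the mechanism, not Anthropic's actual file; the `/share/` path and the rules shown are assumptions.

```
# Hypothetical robots.txt sketch — not Anthropic's actual configuration.
# Asks all crawlers (and Google's crawler specifically) not to fetch
# pages under a /share/ path.
User-agent: Googlebot
Disallow: /share/

User-agent: *
Disallow: /share/
```

Two caveats follow from how the protocol works: compliance is voluntary, so a crawler can simply ignore the file; and robots.txt controls crawling, not indexing, so Google can still list a blocked URL (typically without a snippet) if other pages link to it, which is one plausible explanation for blocked pages surfacing in search results.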
The Bay Area-based artificial intelligence lab, which just raised $13 billion at a $183 billion valuation, also recently changed its own rules on the use of users' conversations with Claude. Last month, in a revision of its privacy policy, it announced plans to use users' chats to train its AI models unless they opt out.
This article was originally published by Forbes Us.