AI content moderation startup Musubi's team photo.
Courtesy: Tom Quisel | Musubi
As policies for content moderation and fact-checking enter a new era, one startup is turning to artificial intelligence, rather than humans, to enforce trust and safety measures.
Musubi, a startup that uses AI to moderate online content, has raised $5 million in a seed round, the company told CNBC. The round was led by J2 Ventures, with participation from Shakti Ventures, Mozilla Ventures and pre-seed investor J Ventures, the startup said.
The company was co-founded by Tom Quisel, who was previously chief technology officer at Grindr and OkCupid. Quisel said he saw an opportunity to use AI, including large language models, or LLMs, alongside human moderators to help social and dating apps “stay ahead” of bad actors. He said Musubi’s AI systems understand users’ tendencies better and can more accurately tell whether a user’s content carries bad intent.
“You pretty universally hear that trust and safety teams are not happy with the quality of results from moderators, and it’s not to blame moderators,” said Quisel, who co-founded the company alongside Fil Jankovic and Christian Rudder. “It’s exactly the kind of scenario where people just make mistakes. It’s unavoidable, so this really creates an opportunity for AI and automation to do a better job.”
Quisel said that moderating bad actors during his time at OkCupid was a “Sisyphean struggle.” The effort required OkCupid to pull engineers, data scientists and other product staffers off core projects to work on trust and safety, but blocking one pattern of attack never held for long, Quisel said.
“They would always figure out how to get around the defenses we built,” he said.
Attacks online can include spamming, fraud, harassment or posting illegal or age-inappropriate content. This is the type of content that has historically been removed from platforms with the help of human decision-makers.
Musubi claims its PolicyAI and AIMod systems work together to deliver decisions with an error rate 10 times lower than that of a human moderator. The company said it also plans to use its AI to identify performance issues and inherent bias in human moderation.
PolicyAI acts as a “first line of defense,” Quisel said, using LLMs to search for red flags that may violate a platform’s policies. Flagged posts are then passed to AIMod, which makes a moderation decision that simulates what a human reviewer would do.
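In broad strokes, that two-stage design might look like the sketch below. This is an illustration only: the function names, policy labels and decision logic are assumptions made for the sake of the example, not Musubi's actual API or models.

```python
# Hypothetical sketch of a two-stage moderation pipeline, loosely
# modeled on the PolicyAI -> AIMod flow described above. All names
# and rules here are illustrative assumptions, not Musubi's code.
from dataclasses import dataclass
from enum import Enum


class Decision(Enum):
    APPROVE = "approve"
    REMOVE = "remove"
    ESCALATE = "escalate"  # route to a human when the call is ambiguous


@dataclass
class Post:
    author_id: str
    text: str


def policy_screen(post: Post, policies: list[str]) -> list[str]:
    """Stage 1 ("first line of defense"): return the policies a post
    may violate. A real system would prompt an LLM with each policy;
    a simple keyword check stands in for that call here."""
    red_flags = []
    for policy in policies:
        if policy.lower() in post.text.lower():  # stand-in for an LLM judgment
            red_flags.append(policy)
    return red_flags


def moderate(post: Post, red_flags: list[str]) -> Decision:
    """Stage 2: simulate what a human moderator would do with a
    flagged post, escalating uncertain cases to a person."""
    if not red_flags:
        return Decision.APPROVE
    if len(red_flags) > 1:       # multiple violations: remove outright
        return Decision.REMOVE
    return Decision.ESCALATE     # a single ambiguous flag: human review


if __name__ == "__main__":
    policies = ["spam", "harassment"]
    post = Post(author_id="u123", text="Buy followers now! Total spam deal.")
    flags = policy_screen(post, policies)
    print(flags, moderate(post, flags))  # ['spam'] Decision.ESCALATE
```

The point of the split is that the cheap first pass only has to decide what might be a problem, while the second pass spends effort only on flagged posts, mirroring how a human team would triage a review queue.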
Musubi’s emergence comes on the heels of an industry shift away from heavy-handed moderation of online content.
Most notably, Meta CEO Mark Zuckerberg in January announced an end to the company’s third-party fact-checking in favor of a system called Community Notes, which relies on users to moderate one another’s content and was first introduced by X, Elon Musk’s microblogging service.
Thus far, Musubi has attracted clients including Grindr and Bluesky.
Bluesky needed to expand its moderation capabilities quickly after its user growth skyrocketed in the wake of the 2024 election. As Bluesky’s base grew to more than 20 million users, content reports poured in for its moderators. Musubi’s 10-person team worked around the clock to deliver a scalable solution for the platform.
“I like that Musubi accurately detects fake and scam accounts in moments,” Aaron Rodericks, Bluesky’s head of trust and safety, said in a statement.