A group of ex-OpenAI employees on Friday filed a proposed amicus brief in support of Elon Musk in his lawsuit against OpenAI, opposing OpenAI’s planned conversion from a nonprofit to a for-profit corporation.
The brief, filed by Harvard law professor and Creative Commons founder Lawrence Lessig, names 12 former OpenAI employees: Steven Adler, Rosemary Campbell, Neil Chowdhury, Jacob Hilton, Daniel Kokotajlo, Gretchen Krueger, Todor Markov, Richard Ngo, Girish Sastry, William Saunders, Carroll Wainwright, and Jeffrey Wu. It makes the case that, if OpenAI’s nonprofit ceded control of the organization’s business operations, it would “fundamentally violate its mission.”
Several of the ex-staffers have spoken out against OpenAI’s practices publicly before. Krueger has called on the company to improve its accountability and transparency, while Kokotajlo and Saunders previously warned that OpenAI is in a “reckless” race for AI dominance. Wainwright has said that OpenAI “should not [be trusted] when it promises to do the right thing later.”
In a statement, an OpenAI spokesperson said that OpenAI’s nonprofit “isn’t going anywhere” and that the organization’s mission “will remain the same.”
“Our board has been very clear,” the spokesperson told TechCrunch via email. “We’re turning our existing for-profit arm into a public benefit corporation (PBC), the same structure as other AI labs like Anthropic (where some of these former employees now work) and [Musk’s AI startup] xAI.”
OpenAI was founded as a nonprofit in 2015, but it converted to a “capped-profit” in 2019, and is now trying to restructure once more into a PBC. When it transitioned to a capped-profit, OpenAI retained its nonprofit wing, which currently has a controlling stake in the organization’s corporate arm.
Musk’s suit against OpenAI accuses the startup of abandoning its nonprofit mission, which aimed to ensure its AI research benefits all humanity. Musk had sought a preliminary injunction to halt OpenAI’s conversion. A federal judge denied the request, but permitted the case to go to a jury trial in spring 2026.
According to the ex-OpenAI employees’ brief, OpenAI’s present structure — a nonprofit controlling a group of other subsidiaries — is a “crucial part” of its overall strategy and “critical” to the organization’s mission. Restructuring that removes the nonprofit’s controlling role would not only contradict OpenAI’s mission and charter commitments, but would also “breach the trust of employees, donors, and other stakeholders who joined and supported the organization based on these commitments,” asserts the brief.
“OpenAI committed to several key principles for executing on [its] mission in their charter document,” the brief reads. “These commitments were taken extremely seriously within the company and were repeatedly communicated and treated internally as being binding. The court should recognize that maintaining the nonprofit’s governance is essential to preserving OpenAI’s unique structure, which was designed to ensure that artificial general intelligence benefits humanity rather than serving narrow financial interests.”
Artificial general intelligence, or AGI, is broadly understood to mean AI that can complete any task a human can.
According to the brief, OpenAI often used its structure as a recruitment tool, and repeatedly assured staff that nonprofit control was “critical” to executing its mission. The brief recounts an OpenAI all-hands meeting toward the end of 2020 during which OpenAI CEO Sam Altman allegedly stressed that the nonprofit’s governance and oversight were “paramount” in “guaranteeing that safety and broad societal benefits were prioritized over short-term financial gains.”
“In recruiting conversations with candidates, it was common to cite OpenAI’s unique governance structure as a critical differentiating factor between OpenAI and competitors such as Google or Anthropic and an important reason they should consider joining the company,” reads the brief. “This same reason was also often used to persuade employees who were considering leaving for competitors to stay at OpenAI — including some of us.”
The brief warns that, should OpenAI be allowed to convert to a for-profit, it might be incentivized to “[cut] corners” on safety work and develop powerful AI whose benefits are “concentrated among its shareholders.” A for-profit OpenAI would have little reason to abide by the “merge and assist” clause in OpenAI’s current charter, which pledges that OpenAI will stop competing with, and instead assist, any “value-aligned, safety-conscious” project that achieves AGI before it does, asserts the brief.
The ex-OpenAI employees, some of whom were research and policy leaders at the company, join a growing cohort voicing strong opposition to OpenAI’s transition.
Earlier this week, a group of organizations, including nonprofits and labor groups like the California Teamsters, petitioned California Attorney General Rob Bonta to stop OpenAI from becoming a for-profit. They claimed the company has “failed to protect its charitable assets” and is actively “subverting its charitable mission to advance safe artificial intelligence.”
Encode, a nonprofit organization that co-sponsored California’s ill-fated SB 1047 AI safety legislation, cited similar concerns in an amicus brief filed in December.
OpenAI has said that its conversion would preserve its nonprofit arm and infuse it with resources to be spent on “charitable initiatives” in sectors such as healthcare, education, and science. In exchange for relinquishing its controlling stake in OpenAI’s enterprise, the nonprofit would reportedly stand to reap billions of dollars.
“We’re actually getting ready to build the best-equipped nonprofit the world has ever seen — we’re not converting it away,” the company wrote in a series of posts on X on Wednesday.
The stakes are high for OpenAI, which reportedly needs to complete its for-profit conversion by the end of this year or next, or risk relinquishing some of the capital it has raised in recent months.