OpenAI lobbies Trump admin to remove guardrails for the industry

US President Trump gestures as OpenAI CEO Sam Altman speaks in the Roosevelt Room at the White House on January 21, 2025, in Washington, DC.

Jim Watson | AFP | Getty Images

After President Trump, in one of his initial actions upon returning to the White House, revoked the country’s first-ever artificial intelligence executive order, OpenAI got to work making sure it would have a seat at the table when it comes to developing and regulating the nascent technology.

On Thursday, OpenAI submitted its proposal to the U.S. government, emphasizing the need for speed in AI advancement and a light hand from regulators while highlighting its take on the dangers of AI technology coming out of China.

The proposal underscores OpenAI’s direct effort to influence the government’s coming “AI Action Plan,” a tech strategy report to be drafted by the Office of Science and Technology Policy and submitted to President Trump by July.

In January, President Trump threw out the AI executive order signed by President Biden in October 2023, which was titled “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” President Trump subsequently issued a new executive order, declaring that “it is the policy of the United States to sustain and enhance America’s global AI dominance.” He mandated that an AI Action Plan be submitted to the President within 180 days.

OpenAI, which as of last month was reportedly close to finalizing a $40 billion investment from SoftBank at a $260 billion valuation, is in a precarious position with Trump’s second White House. While the company was part of Trump’s Stargate announcement and the billions of dollars of AI infrastructure investment tied to the plan, OpenAI is in a heated legal and public relations battle with Elon Musk, who owns a rival AI startup and is one of Trump’s top advisors.

OpenAI told reporters last month that the company is considering building data center campuses in 16 states that have indicated “real interest” in the project.

In its proposal on Thursday, OpenAI expressed its distaste for the current level of regulation in AI, calling for “the freedom to innovate in the national interest” and a “voluntary partnership between the federal government and the private sector” rather than “overly burdensome state laws.”

The company wrote that the federal government should work with both leading AI developers and startups “on a purely voluntary and optional basis.”

OpenAI also said the country needs an “export control strategy” for U.S.-developed AI, one that promotes the global adoption of American AI systems.

Removing guardrails

OpenAI recommended that the government allow federal agencies to “test and experiment with real data” and potentially grant a temporary waiver for FedRAMP, the Federal Risk and Authorization Management Program. It also said that the government should “modernize” the process for AI companies clearing approval for federal security regulations by “establishing a faster, criteria-based path for approval of AI tools.”

The company said its recommendations could allow new AI services to be accessed by the government roughly 12 months earlier than current processes. But industry experts have regularly expressed concerns that speedy government adoption of AI comes at the potential expense of safety, effectiveness and questions of whether the tech is needed at all in certain cases.

OpenAI recommended that the government partner with the private sector to develop AI for national security use.

“The government needs models trained on classified datasets that are fine-tuned to be exceptional at national security tasks for which there is no commercial market — such as geospatial intelligence or classified nuclear tasks,” OpenAI wrote.

In January, OpenAI released ChatGPT Gov, a product it built specifically for U.S. government use.


In the proposal, OpenAI also said the U.S. needs “a copyright strategy that promotes the freedom to learn,” one focused on “preserving American AI models’ ability to learn from copyrighted material.”

“America has so many AI startups, attracts so much investment, and has made so many research breakthroughs largely because the fair use doctrine promotes AI development,” OpenAI wrote.

Since its public release in late 2022, OpenAI’s ChatGPT chatbot has been crawling the web to provide answers to user queries, allegedly relying, in part, on copy pulled directly from news stories. The company has been sued for copyright infringement by the Center for Investigative Reporting, the country’s oldest nonprofit newsroom, as well as by The New York Times, the Chicago Tribune and the New York Daily News. It’s also been sued by authors and visual artists.

In spelling out its view on the risks posed by China, OpenAI wrote that DeepSeek — the Chinese AI startup — costs users their privacy and security. In January, DeepSeek’s app went viral in the U.S., surpassing ChatGPT for a time at the top of Apple’s App Store.

One big concern for AI experts and investors in the U.S. is that DeepSeek’s model was reportedly developed at a fraction of the cost of rival models from OpenAI, Anthropic, Google and others.

“While America maintains a lead on AI today, DeepSeek shows that our lead is not wide and is narrowing,” OpenAI wrote in the report.
