The White House should keep “key rules” in place for artificial intelligence testing and transparency, according to a letter on Thursday from the Consumer Federation of America and Mozilla.
The letter, which was viewed by CNBC, follows President Donald Trump’s decision to revoke former President Biden’s 2023 executive order on AI requiring new safety assessments, equity and civil rights guidance, and research on AI’s impact on the labor market. It was addressed to several officials, including David Sacks, the White House’s AI and crypto czar, and Mike Waltz, the new national security advisor.
Biden’s order mandated that developers of large-scale AI systems, particularly ones that may pose a risk to U.S. national security, public health or the economy, share safety test results with the U.S. government before releasing the technology to the public.
Signers of the letter, including the Center for Digital Democracy and the National Consumer Law Center, noted that President Trump’s executive order would revise rules “requiring that the federal government make sure AI systems are tested and disclosed before they’re used on consumers.” Those systems, they said, are used to help the Department of Veterans Affairs prioritize care and review retirement benefits.
“Without guardrails like testing and transparency on an AI system before it’s used — guardrails so basic that any engineer should be ashamed to release a product without them — seniors, veterans, and consumers will have their benefits improperly altered and their health endangered,” they wrote. “We call on you to keep key rules about testing and transparency for safety- and rights-impacting AI in place.”
They said the bar set by the prior rules “is not high” and “is the least our seniors, veterans, and everyday consumers deserve.”
When Biden issued the order, many civil society leaders praised the 111-page document as a step in the right direction but said it didn’t go far enough to recognize and address real-world harms stemming from AI models, while many tech leaders worried the rules would hinder innovation. Still, it was widely viewed as an effective compromise.
AI has long been controversial due to potentially harmful ripple effects, especially for vulnerable and minority populations. Police use of AI has led to a number of wrongful arrests, investigations have revealed car insurance algorithms to be weighted against marginalized communities, and research has found significant racial disparities in mortgage underwriting.
The organizations involved in the writing of Thursday’s letter said the former executive order’s safety rules applied to large enterprises that built AI systems impacting “large numbers of people, often at their most vulnerable.” Using taxpayer dollars on untested AI systems, they said, could lead to “further waste, fraud, and abuse.”
“The issues we are highlighting here are not about ‘ideological bias’ or ‘engineered social agendas’ as identified in President Trump’s latest executive order on AI,” the letter said. “Rather, the issues at play here are about basic principles of safety engineering that have been vital for responsible adoption of every other technology that has impacted millions of people, from how we test our planes to how we secure our software.”