Trump’s New AI Framework Raises Red Flags for Experts

On Friday, the Trump administration released its recommendations for Congress on a national policy regarding artificial intelligence. The four-page bulleted document outlines general ideas for a legislative framework. While on the surface the recommendations seem to be vague calls for safety and free speech, some AI ethics experts are crying foul.

The guidelines outline some protections for the public while allowing AI companies to ramp up innovation without the “burden” of strict guardrails. The six objectives call for child safety requirements, assurances that residents won’t pay higher electricity rates to fund data center buildout, plans for developing an AI-friendly workforce, and a delineation of state versus federal authority to regulate the technology. The framework says that Congress should make sure that state laws “do not govern areas better suited to the Federal Government or act contrary to the United States’ national strategy to achieve global AI dominance.” Overall, the framework recommends a light touch on AI regulation. Critics see the guidelines as a way Trump is trying to both protect Big Tech and gain control over which tech companies are targeted and censored.

‘A poison pill for states’ rights’ 

A notable point of contention in the proposed framework calls on lawmakers to “preempt state laws that impose undue burdens” and “prevent a fragmented patchwork of state regulations.” President Donald Trump has tried to stifle efforts by states to regulate AI in the past, most recently with an executive order in December, saying that state legislation was too “cumbersome” and was not allowing companies to innovate.

“This roadmap is a poison pill for states’ rights,” says Rumman Chowdhury, a former U.S. science envoy for AI. “By dictating congressional behavior and again targeting state-level regulation, Trump is expanding presidential authority further.”

In one section, the framework says that “states should not be permitted to penalize AI developers for a third party’s unlawful conduct involving their models.” This is a red flag for critics, who say this could result in a way to shield AI companies from being held liable for harms.

“At a moment when a clear majority of Americans — across party lines — is asking for stronger guardrails on AI, this framework moves in the opposite direction, proposing to limit the ability of parents, consumers, and communities to hold technology companies accountable for the risks and harms their products cause,” says Alondra Nelson, who previously led the Biden administration’s Office of Science and Technology Policy.

The central criticism of the framework is that it advances Trump’s effort to suppress state power over AI regulation by asking for new legislation that would limit the reach of current and future state law.

“There is deep irony in this,” says Nelson. “States have been acting as the laboratories of democracy they have always been, responding to real harms reported by their constituents, from algorithmic discrimination in hiring and lending to the exploitation of children by AI-powered platforms.”

The government claims that a patchwork of state laws creates confusion for business and ignores the global nature of AI development, an argument which one AI policy expert called “weak.” 

“There are many cases where we have states making their own laws — education, insurance, drug laws, and even reproductive care now — and companies seem to manage just fine,” says an AI policy expert who asked to remain anonymous because they hadn’t received permission from their employer to speak to the press. The expert said states can move quickly, focus on what’s important, and borrow and learn from each other when innovating on AI regulation. 

“In this very framework there are numerous other places — child safety, state use of AI, law enforcement use of AI — where the administration allows states to go their own way,” they point out.

And while the first part of the proposal focuses on “protecting children and empowering parents,” critics say its recommendations aren’t specifically geared to holding AI companies accountable for protecting children.

“It doesn’t include reference to stronger proposals such as removing liability shields for AI companies when their products lead to harm to minors,” says Steven Feldstein, technology researcher and author of The Rise of Digital Repression.

“It looks like more of the same for this administration,” Feldstein continued, summing it up as “light touch regulations on AI, keep states at bay from enacting their own rules, free up companies to innovate and trust they won’t release models that will bring harm, and vague details about how this will end up coming together.”

A Bid for More Control

Some critics who spoke with Rolling Stone said they feel like the proposal’s call for federal preemption is a red herring covering the Trump administration’s real goal of expanding presidential authority. 

“The federal government wants more centralized power over how the companies design their systems,” stated one expert.

Chowdhury agrees. “This AI bill should be viewed as part of his ongoing strategy to consolidate power in his presidency,” she says, bringing up Trump’s executive order from December, which she described as the president mandating “a list of ‘onerous’ state-level legislation that he means to attack.”

Another section of the framework that has raised alarms is one which addresses preventing censorship and protecting free speech.

“Congress should prevent the United States government from coercing technology providers, including AI providers, to ban, compel, or alter content based on partisan or ideological agendas,” reads the proposal. 

Critics see this language as intentionally vague, which leaves the door open for Trump to be the judge and jury of what he does or doesn’t like, without any specific standard, giving him a type of invisible control over companies. As the AI policy expert who asked to remain nameless characterizes it: “By threatening those who develop AI models with vague and incoherent language about ‘ideological bias’ that cannot be evaluated in any meaningful way, the administration is saying, ‘I, and only I, will decide what’s appropriate for your models to produce, and I’ll use whatever rationale I feel like to do so.’”

Nelson says that acting as if AI tools and systems are completely neutral in their ideology betrays a fundamental misunderstanding of generative AI.

“Every model encodes assumptions, every tool reflects choices, and every output carries a point of view,” says Nelson. “There is no neutral baseline to protect. There is only transparency about those choices, or the lack of it — along with robust laws to ensure this.”

Additionally, Nelson points to a recent NBC News poll that found the majority of registered voters believe the risks of AI outweigh its benefits. 

“Americans are telling us, clearly and consistently, what they want: safe, ethical, and accountable AI,” says Nelson. “This framework offers them something else entirely.”
