
Trump takes another shot at dismantling state AI regulation

Pixelift editorial team

Photo: Digital photo collage of a judge with a gavel whose hands have too many fingers.

The Trump administration presented a seven-point framework for an AI bill on Friday, which makes clear that the federal government should avoid most AI regulation except for child protection rules, and that states cannot interfere with the "national strategy to achieve global AI dominance". The plan encourages Congress to introduce more rigorous safeguards for minors using AI services and measures aimed at preventing rising energy costs associated with AI infrastructure. It proposes a "wait and see" approach to the question of training AI models on copyrighted material without consent, while maintaining the long-standing Republican push to limit states' ability to enact their own AI rules. The document supports provisions similar to the Take It Down Act of May 2025, which bans non-consensual AI-generated intimate visual material, and proposes age verification for AI platforms. For now, however, the framework remains only a proposal: it will take effect only if Congress passes it.

The Trump administration has just released a new plan for regulating artificial intelligence that clearly shows where AI policy in the United States is headed — and the direction is decidedly pro-business and anti-regulatory. The seven-point document, presented on Friday, contains a message so clear that it's hard to miss: the federal government should refrain from most AI regulations, while simultaneously prohibiting states from introducing their own rules that could hinder the "national strategy to achieve global dominance in AI". This is not a subtle diplomatic game — it's an open attack on state law and an attempt to concentrate all regulatory power in the hands of Washington.

The document is another episode in the Trump administration's campaign, now lasting nearly a year, for federal preemption of state AI regulations. However, this time the plan had to give way to certain pressures — particularly when it comes to child safety. Nearly forty state and territorial attorneys general expressed opposition to completely replacing local safety regulations, forcing the document's creators to make certain compromises. This is an interesting tension between the ideological desire for deregulation and the pragmatic necessity to respond to bipartisan concerns about protecting minors.

Let's take a closer look at what this plan really contains and what implications it has for the future of AI regulation worldwide — because decisions made in Washington always have broader consequences.

Compromise under pressure for child protection

When it comes to minor safety, the administration had to show it was listening. The plan contains a series of provisions designed to protect children in the digital world, from age verification through parental attestation to restrictions on training AI models on children's data. The document also supports the Take It Down Act, which came into effect in May 2025 and prohibits non-consensual AI-generated "intimate visual depictions", requiring platforms to remove them quickly.

However, even in the section devoted to child protection, the plan's characteristic pragmatism shows through. The administration does not propose an outright ban on training models on children's data or on targeting children with ads; it wants merely to "restrict" these practices. The distinction matters: it suggests that for the document's authors, child safety is important but should not completely block business innovation. The proposed age verification is particularly controversial from a privacy perspective. Because the plan speaks openly of "parental intermediaries", platforms would have to collect and store the data needed to verify users' ages, which raises surveillance concerns.

Interestingly, the document explicitly prohibits states from imposing their own child protection regulations — with one exception. States may still enforce "generally applicable regulations protecting children," such as prohibitions on materials depicting child sexual abuse, even if generated by AI. This concession was necessary for the document to pass bipartisan scrutiny, but also shows that even the Trump administration understands that some matters are too serious to completely federalize.

Deepfakes, voices and right of publicity — finally?

The plan contains a proposal that is arguably long overdue: a federal law protecting people from unauthorized distribution or commercial use of artificial replicas of their voice, image or other identifiable features. In an era of deepfakes, when AI-generated video looks increasingly realistic and fake material can spread across the world in seconds, such protection seems more than justified. The administration speaks of a "federal system", which could finally create a unified right of publicity at the national level.

However, the document is cautious about exceptions. The plan mentions "clear exceptions" for parody, reporting, satire and other forms protected by the First Amendment. This makes sense — we don't want right of publicity to become a tool for censoring artistic or political criticism. But it also means there will be long legal battles over exactly where the line lies between permissible parody and impermissible fraud or defamation.

In a global context, this is particularly important. Deepfakes know no borders — video generated in one country spreads immediately around the world. Federal right of publicity law in the United States could become a model for other countries, or alternatively, could create new regulatory inequality, where America has protection and other countries don't. The European Union is already moving in this direction through its AI Act, so this will be an interesting regulatory battleground in the coming years.

Copyright — let the courts fight it out

One of the most controversial issues in the AI industry is the question of whether training models on copyrighted material without permission constitutes copyright infringement. The Trump administration has a clear position — it doesn't think it does. But instead of trying to push this position through Congress, the plan suggests that Congress take no action at all on this matter and let the courts decide whether training on copyrighted material constitutes fair use.

This is a strategy that makes sense for the AI industry — letting courts decide means that for years there will be lawsuits, while training continues. But from the perspective of creators and publishers, it means their work can be used without permission and without compensation for many years until courts finally settle the matter. The document acknowledges that there are arguments on both sides, but clearly prefers the status quo, in which the AI industry has a free hand.

This approach is particularly interesting in an international context. The European Union and other jurisdictions may approach this differently, creating a situation where training models on copyrighted material would be legal in the USA but illegal in Europe. This could lead to fragmentation of global AI infrastructure, with the United States as home to "more aggressive" training, and Europe as a more restrictive market.

State preemption — the main battle

When it comes to the general approach to regulation, the administration's plan is consistent in its desire to eliminate state rules. The document clearly states that Congress should "preempt state AI regulations that impose excessive burdens" and avoid "fifty divergent" standards for companies. The reasoning is that AI is "inherently an interstate phenomenon" with implications for foreign policy and national security.

This is the central thesis of the entire plan — deregulation at the state level is meant to accelerate innovation. The document says outright: "States should not be authorized to regulate AI development". Such a position is radical, especially when compared to the history of regulation in the United States, where states have traditionally had significant power in matters of consumer protection, safety and privacy. California, for example, has long been a pioneer in data privacy regulations — first with CCPA, now with CPRA. The Trump administration's plan would be a threat to such an approach.

However, the plan contains certain exceptions. States can still enforce generally applicable regulations protecting children, and may also be authorized to enforce regulations regarding fraud and illegal activities. But when it comes to "regulating AI" itself — significant power would be shifted to Washington. This is a clear signal that the administration wants central, federal control over AI, not dispersed, state approaches.

Energy infrastructure and the problem of electricity bills

One of the more practical elements of the plan concerns the rising energy costs associated with AI infrastructure. Training large AI models requires enormous amounts of energy, and data centers built to support AI can significantly increase energy demand in local communities. The plan acknowledges this is a problem and suggests that Congress find ways to ensure that "ordinary residents will not experience increased energy costs as a result of the construction and operation of new AI data centers".

However, the plan simultaneously suggests that Congress streamline federal permitting for the construction and operation of data centers, making it easier for AI companies to "develop or acquire on-site and behind-the-meter power generation". In other words, data centers should be built quickly, but costs should not be passed on to ordinary citizens. This is an attempt to balance interests: let the AI industry grow, but not at the expense of local communities.

This approach makes sense from a public policy perspective, but is also an acknowledgment of the growing political power of the AI industry. A few months ago we saw the first bipartisan efforts regarding rising electricity bills in communities with data centers — and now the administration is saying this is an issue that needs to be addressed. This shows that even a pro-business administration understands that if AI causes significant increases in energy costs for ordinary people, there will be political resistance.

No new regulator and sectoral approach

The plan answers a long-standing question: should the United States create a single federal body responsible for AI regulation, or should regulation be dispersed among existing sectoral agencies? The administration's answer is clear: don't create a new agency. Instead, AI regulation should be handled by existing bodies with sectoral competencies.

This approach has both advantages and disadvantages. On one hand, existing agencies such as the FDA, FTC or SEC already have experience regulating their sectors and can quickly adapt existing frameworks to AI. On the other hand, it means fragmentation — AI in medicine will be regulated by the FDA, AI in finance by the SEC, and AI in general business by the FTC. This can lead to inconsistencies and regulatory gaps.

The plan also talks about supporting "the development and implementation of sector-specific AI applications" — which means each sector will have its own guidelines and standards. This is a pragmatic approach, but also means there will be no unified position on AI safety or transparency at the level of the entire economy.

Freedom of speech and the Trump administration paradox

A particularly interesting section of the plan is the one devoted to freedom of speech and the First Amendment. The document states that the government "must defend freedom of speech and First Amendment protections, while preventing the use of AI systems to silence or censor lawful political speech or dissent". The plan goes further, saying that Congress should explicitly prohibit the government from "coercing" AI providers "to prohibit, coerce or alter content based on partisan or ideological agendas".

This is fascinating, given the Trump administration's own history on AI and freedom of speech. President Trump signed an executive order prohibiting government agencies from using AI models that "contained" topics such as systemic racism. Recently, he also ordered all agencies to blacklist the AI company Anthropic for imposing restrictions on military use of its models, a move Anthropic claims violates its First Amendment rights.

So the document, which talks about protecting freedom of speech and prohibiting government coercion of AI providers to censor, comes from an administration that actively pressures AI providers to comply with its ideological preferences. It is a classic case of political double standards: one hand of government writes about protecting free speech while the other violates it. The plan also contains a provision saying that if government agencies censor speech on AI platforms, Americans should have the opportunity to "seek redress", though this would be difficult to enforce against the government itself.

Accelerating AI development as the real goal

The entire plan can be summed up in one statement contained in the document: "The United States must lead the world in AI by removing barriers to innovation and accelerating the deployment of AI applications across sectors". This is the real goal — not consumer protection, not safety, but acceleration. The plan even includes a proposal for Congress to make federal datasets available to AI providers and academics in "AI-ready formats" for training models.

This reveals the real logic behind this plan. The administration believes that the United States could lose global dominance in AI if it is too restrictive in regulation. China, the European Union and other countries are developing their own AI models, and America must be faster. Thus, any regulation — whether state or federal — is seen as an obstacle to overcome.

This approach has a certain logic, but also ignores real social costs. If AI is developed without proper safeguards, consumers can be deceived by deepfakes, children can be exposed to manipulative content, and workers can find themselves jobless due to automation. The plan contains several provisions regarding child safety, but relative to the overall desire for deregulation, they are marginal.

Ultimately, the Trump administration's plan is a clear signal: the United States is choosing acceleration over protection. This may be a strategy that will benefit the AI industry and potentially the US as a whole in the global technology race. But there will also be victims — workers, consumers, children, and communities struggling with rising energy costs. The plan contains a few provisions to protect them, but they are insufficient relative to the scale of deregulatory ambitions. This is a choice the administration is making consciously, and it will have long-term consequences for AI regulation not only in the United States, but around the world.

Source: The Verge AI