Trump’s AI framework targets state laws, shifts child safety burden to parents

The Trump administration has unveiled a federal legislative plan aimed at centralizing AI regulation in the United States. The proposed legal framework would preempt state laws, eliminating the current patchwork of local rules designed to protect users from threats related to artificial intelligence. According to the White House, a unified federal policy is necessary to keep the U.S. competitive in the global technology race. The plan, however, shifts much of the responsibility for child safety from regulators to parents, drawing criticism from consumer rights advocates. The move marks a change of direction: states such as California have previously pursued more restrictive regulatory measures. Centralization could accelerate innovation in the AI sector, but it risks weakening user protections, particularly around child safety and privacy. For global technology companies, the proposal offers a chance to simplify compliance, though it remains an open question whether a model built on parental responsibility will be adequate against growing algorithmic threats.
The Trump administration has just proposed something that could fundamentally change the landscape of artificial intelligence regulation in the United States, and with it, much of the world. The legislative framework presented on Friday emphasizes federal primacy, directly challenging states' right to regulate AI technology on their own. This is not a typical technocratic move. It is a political battle over who controls the future of one of the most important technologies of our time.
The proposal strikes at the heart of a growing tension between two visions of the future. On one side are states, from California to New York, which in recent years have aggressively introduced their own rules on AI, data protection, and child safety. On the other is the federal administration, which claims that this "mosaic" of regulation undermines American innovation and the country's ability to compete with China in the race for technological dominance. Trump's framework is blunt: one set of rules for all of America, written on the premise that technology should be able to develop quickly and without obstacles.
But there is another dimension that should catch the attention of anyone watching this sector. The proposal does not only centralize power; it also shifts responsibility for child safety from the shoulders of tech companies and regulators directly onto parents. That is a paradigm shift with profound implications for what the AI ecosystem will look like.
Federal preemption as a deregulation tool
A key element of the proposed strategy is federal preemption: replacing state regulations with a single federal system. The White House argues that "this framework can succeed only if applied uniformly across the United States." That sounds reasonable from a regulatory consistency perspective, but the devil is in the details.
The reality is that the last five years have brought a wave of innovative state AI regulation. California led the way with SB 1047, ultimately vetoed by Governor Newsom but a clear demonstration of states' ambitions on AI oversight. New York introduced audit requirements for algorithmic decision-making systems. Colorado and other states moved to set privacy and transparency standards. This was real, organic regulation growing out of the needs of local communities.
Trump's proposal essentially says: stop. There will be no patchwork. There will be one set of rules, written in Washington, built on a "light touch" and favoring innovation. That sounds like a victory for the tech industry, and many of its representatives have already welcomed it. But for advocates of consumer protection and data privacy, it sets the clock back years. States would lose the right to set higher safety standards for their own residents.
Innovation as the holy grail of regulation
The innovation argument appears throughout Trump's framework and sits at the heart of his approach. The administration claims that "a patchwork of conflicting state laws would undermine American innovation and our ability to lead in the global AI race." That is a geopolitical argument dressed up as regulatory pragmatism.
The argument is familiar: if every state has different rules, tech companies must adapt their products to each jurisdiction, which raises costs and slows deployment. Better one set of federal rules, even if less rigorous, than the chaos of fifty state regimes. We have heard this logic many times, and sometimes it makes sense. But here a problem emerges: "innovation" is being used as an argument for deregulation, not for smart regulation.
History shows that regulation and innovation are not mutually exclusive. The EU's GDPR did not kill innovation in Europe; instead, it forced companies to think more strategically about data protection. Here, though, the scenario is different: federal preemption that does not merely unify the rules but weakens them. That is not smart regulation. It is deregulation disguised as federalism.
Shifting responsibility to parents: who really protects children?
Perhaps the most controversial aspect of the framework is the transfer of responsibility for child safety from tech companies and regulators to parents. It is a fundamental change in how we think about accountability in the digital world. Instead of requiring companies to build safe products and protect children by design, the framework makes parents the last line of defense.
The problem with this approach is obvious to anyone who has children, or remembers being one. Children are naturally drawn to technology, and their ability to assess risk is limited; that is developmental biology, not a failure of parenting. When algorithms are designed by engineers backed by billions of dollars of research on user engagement, and parents are armed with instruction manuals and the hope of staying vigilant, it is not a fair fight.
History shows what happens when responsibility is shifted to parents. In the 1990s, the cable television industry argued that parents should use parental controls, and many were never properly configured. The internet of the 2000s preached "digital citizenship" and "digital literacy" while children were bombarded with content engineered to be addictive. Now history is repeating itself, except the stakes are AI systems far more sophisticated at manipulating human emotions and behavior.
Global competition as a pretext for weakening regulation
The argument about competition with China recurs throughout the framework. China is indeed developing AI quickly and aggressively, and America does want to remain the technology leader. But is weak AI regulation really the path to that goal?
The history of technology suggests otherwise. America did not become the internet's leader because it had the weakest regulations; it led because it had a diverse innovation ecosystem, talent drawn from around the world, and the ability to ship quickly. GDPR did not kill the European tech sector; European companies simply had to be more creative about their business models. Some European AI startups now compete with American ones partly because they had to think about safety and ethics from the start.
The global competition argument is sometimes legitimate, but here it is stretched too far. There is no evidence that state AI regulations in America hamper innovation more than rules in Europe or Japan do. What does exist is pressure from the tech industry to avoid regulation it fears, and the specter of global competition is a convenient way to justify that pressure.
Light-touch regulation: what does it mean in practice?
Trump's framework rests on the concept of a "light touch": an approach that favors industry self-regulation over regulatory mandates. It sounds like a reasonable compromise, but in practice it means companies would have wide latitude in how they build and deploy AI systems, provided they meet a few broad, general guidelines.
History shows how self-regulation plays out in the tech industry. Social media companies promised to police hate speech and misinformation themselves, and a decade later the problems persist. The financial industry insisted it could keep itself safe, and then came the 2008 financial crisis. Self-regulation works when a company's interests align with the public's. In AI, where algorithms are built to maximize engagement and revenue while user privacy and child safety register as costs, that alignment does not exist.
Practically, "light touch" means that companies will have to meet minimum requirements — perhaps some form of transparency about how their models work, or basic safety guarantees. But there will be no mandates for independent audits, no requirements to mitigate bias, no obligation to inform users they are talking to AI. This will be de facto deregulation, wrapped in the language of pragmatism.
State law will not disappear: the conflict will continue
One of the most interesting aspects of this proposal is that state law will not simply vanish because Washington wants it to. Federal preemption is a proposal, not an accomplished fact. States will defend themselves, and they have the tools to do so.
California, New York, and other states have already shown a willingness to fight federal rules they consider too weak. They can do so through state consumer protection law, fraud statutes, and even constitutional challenges. They can also resist politically; governors can be powerful voices in congressional debates over technology. A scenario in which Washington and the states battle for control of AI regulation is very likely in the coming years.
Trump's proposal, then, may not be the end of state AI regulation; it may be the beginning of a fiercer battle over what the right level of regulation should be. The irony is that this battle could ultimately produce a more complex regulatory system, not a simpler one.
Who really wins with this framework?
Look past the rhetoric of innovation and global competition and it is clear who wins with this framework: the large tech companies that already dominate the AI industry. OpenAI, Google, Meta, and the other giants have the resources to handle even a complex regulatory system; they can afford the compliance teams and lawyers needed to navigate one. It is startups and smaller companies that a patchwork of state regulations would genuinely harm, yet they are not the ones who stand to gain most from weaker federal rules.
For consumers and children, the picture is murkier. A unified federal regulation could be better than a chaos of state rules, if it were well written. But if the federal rules are weak, the result is worse than the status quo, where states can set higher standards. Trump's proposal looks like a case where unification means lowering standards, not raising them.
That is what is at stake: will America get AI regulation that protects innovation and competition, or regulation that protects consumers and children? Trump's proposal clearly chooses the former. But history shows that, in the long run, societies that neglect consumer protection and child safety pay a price, even if they do not see it right away.