In the fast-moving legislative sprint to enact President Donald Trump’s ambitious “Big Beautiful Bill,” one crucial but controversial element has sparked widespread opposition: the AI moratorium. As initially drafted, the moratorium, championed by White House AI czar David Sacks, imposed a sweeping ten-year freeze on state-level AI regulation, aiming to create a uniform federal framework by preventing states from writing their own AI rules. The approach quickly drew intense backlash. Critics ranging from a bipartisan coalition of 40 state attorneys general to staunchly conservative figures like Marjorie Taylor Greene decried the provision as dangerously overreaching and insufficiently protective of the public interest.
The original ten-year moratorium was arguably too rigid and disconnected from the rapidly evolving AI landscape. It risked hobbling innovative, localized, and urgent protective measures during a decade in which AI’s societal impact is only expected to accelerate. It read less like strategic foresight and more like a shield for Big Tech’s commercial interests at the expense of public safety and accountability.
The Five-Year Compromise That Failed to Satisfy
In an attempt to quell the opposition, Senators Marsha Blackburn and Ted Cruz unveiled a revised version of the moratorium that shortened the pause on state regulation from ten years to five. This iteration also incorporated several “carve-outs,” ostensibly exempting regulations related to child safety, privacy, unfair business practices, and rights of publicity, a nod to protections championed by Blackburn, whose home state of Tennessee has aggressively fought AI deepfakes that exploit musicians’ likenesses.
Yet even with these concessions, the five-year moratorium remained contentious. Blackburn herself wavered: she initially opposed the moratorium, then backed the revised version, then ultimately rejected it again under pressure. Her reversals reflected the tension between appeasing Big Tech, which seeks regulatory certainty, and answering grassroots advocates and state officials who demand stronger defenses against AI’s growing risks.
The moratorium’s exceptions are undermined by a critical caveat: state laws survive only if they do not impose an “undue or disproportionate burden” on AI systems or automated decision systems. This vague burden clause effectively serves as a backdoor for tech companies to challenge, and often derail, meaningful state regulations. By letting developers cast virtually any compliance cost as an undue burden, the provision empowers the industry to stymie protective laws under the guise of preserving innovation.
The Broad Impact on Protective Legislation
The moratorium’s scope and its caveats threaten to sweep away a broad array of regulations designed to safeguard vulnerable groups. Danny Weiss of the advocacy group Common Sense Media condemns the provision as “extremely sweeping,” warning that it could hinder nearly every state effort to regulate technology in the name of child safety and digital wellbeing. The impact extends well beyond children: privacy frameworks, anti-exploitation laws, and other protections vital to the public interest could be curtailed for years.
Moreover, this federal overreach has drawn ideological opposition from disparate quarters, from unions concerned about workforce protections to political figures like Steve Bannon, who see the moratorium as an unchecked window for tech elites to entrench their influence.
Why State Autonomy Matters in AI Regulation
At the heart of this debate lies a fundamental question: who should hold the reins in regulating AI, a federal government offering sweeping, industry-friendly preemption, or the states, which often respond more quickly to the specific harms their residents face?
States have become laboratories of democracy, pioneering regulations on everything from online privacy to protecting individuals’ likenesses in the AI age. Tennessee’s ELVIS Act, which shields musicians’ voices and likenesses from AI deepfakes, is a case in point. Stripping states of the ability to tailor such rules risks leaving citizens vulnerable in a digital landscape where harms are unevenly distributed and rapidly changing.
Insisting on federal preemption without putting federal protections in its place signals an overprioritization of industry comfort over human safety. It suggests that Big Tech’s ability to profit from AI under minimal oversight matters more than the public’s right to safety and dignity.
The Path Forward Requires Courage, Not Compliance
Despite its purported intent to bring regulatory clarity, the AI moratorium, in either of its forms, raises alarms about the erosion of progressive safeguards precisely when they are most needed. The repeated reversals by lawmakers like Blackburn underscore the political uncertainty surrounding the balance between innovation and protection.
The heart of the matter is simple: AI’s potential for harm, whether exploitation, privacy violation, or manipulation, requires a robust and timely legal response. A federal moratorium that shackles state initiatives risks allowing these harms to proliferate unchecked. Rather than handing Big Tech a “get-out-of-jail-free card,” Congress should pass proactive federal legislation that elevates, not replaces, strong state-level protections, while closing the loopholes that let AI companies dodge accountability through vague claims of burdensomeness.
Legislators must resist the temptation to bow to tech lobbyists’ pressure and instead champion bold policies that prioritize people over profit. AI regulation is too critical a frontier to be compromised by watered-down moratoriums offering false security while enabling exploitation. The future deserves laws built on courage and care—not convenience.