Americans Won’t Thank Washington for Protecting Big Tech in the NDAA

Unregulated artificial intelligence is a new pandemic. Parents know it. Consumers know it. Educators know it. State lawmakers know it. And Members of Congress know it. Blame who you will for COVID-19 or the price of eggs, but there is no question that the harms of AI, like surveillance pricing for those eggs, are being cooked up in the labs of Silicon Valley. And all the tech billionaires want for Christmas is for the federal government to stop state lawmakers from protecting Americans.

Federal unwillingness to regulate Big Tech is a pox on both parties dating back to at least the Obama administration, and so, in recent years, state lawmakers have moved to protect American citizens from tech oligarchs, who have made it clear through their actions that they feel no moral obligation to make products safe or secure, or to operate in compliance with healthy social or democratic principles. That such a virulent agenda might now be snuck into the national defense bill of the United States is an absurdity beyond comprehension.

Having failed to pass a 10-year federal moratorium barring state regulation of artificial intelligence in the “One Big Beautiful Bill”—the Senate rejected the provision 99 to 1—Big Tech billionaires now want Congress to attach the same rule to the must-pass National Defense Authorization Act (NDAA). Among the many hazards inherent to this anti-democratic, anti-American agenda, is that it would in fact undermine national security.

National defense of the U.S. is a holistic calculus that goes beyond soldiers in uniform, weapons on hand, and the PT standards that seem to occupy much of the current secretary’s attention. It is a geographic, economic, intellectual, cultural, technological, and political consideration in which the greatest strength is also the greatest weakness—that it can only truly be eroded from within. Among the core assets of the U.S. is that “States serve as laboratories of democracy,” to quote a November 24 letter signed by dozens of state lawmakers of both parties urging rejection of the AI moratorium…

As state lawmakers and policymakers, we hear regularly from constituents about rising online harms and the growing influence of AI on their lives. In an increasingly fraught digital environment, young people face new risks online, seniors are increasingly targeted by AI-enabled scams, and workers and creators are encountering novel challenges in an AI-driven economy. In the years ahead, AI’s impact will require lawmakers to consider consequential public policy questions, making it essential that states retain the authority to act.

Just as the strength of a cable or rope is derived by combining small strands of material, U.S. policy forged in state “laboratories” is, on the whole, a strength of the American system—especially when those efforts are designed to mitigate specific, identifiable harms to constituents. And those harms are reflected in the nearly 400 state bills being tracked by Reset Tech, an independent advocacy organization. From children suffering adverse and deadly health effects to unprecedented data privacy abuses, state lawmakers are working overtime to do the job Congress has yet to do despite years of strident hearings promising to hold Big Tech accountable for its culture of negligence.

For now, the least Congress can do is stay out of the way, and few if any of their constituents will complain that they declined to give Big Tech another free pass—let alone ten more years—to do whatever they want with our data, with jobs, with child safety, with national security. No industry enjoys such latitude, and no ordinary citizen of any political party benefits from the technological pandemic that will surely run amok at the speed of AI.

Americans across the political spectrum have seen through Big Tech’s bullshit, and they’re not buying it anymore. Generic promises of “innovation” don’t mean anything to parents trying to navigate the dangers of social media and chat products, or to seniors increasingly vulnerable to scams, or even to business enterprises trying to balance the opportunities of AI with the novel security risks it presents. Nobody is going to thank Congress or the White House for preventing state legislators from working to protect children, consumers, and local businesses. The idea of attaching such a provision to the NDAA is as politically naive as it is bad for the country.


Is Congress Prepared to Scuttle Good State Laws for AI Developers?


A fight is underway in Congress over an amendment to the “big beautiful” budget reconciliation bill that would put a 10-year moratorium on state laws governing certain uses of artificial intelligence. The amendment, proposed by Republicans and opposed by Democrats on the House Energy and Commerce Committee, is broad and concerning to multiple stakeholders, including 36 State Attorneys General who signed a letter addressed to the House. The letter states, “The impact of such a broad moratorium would be sweeping and wholly destructive of reasonable state efforts to prevent known harms associated with AI.”

The language, which passed out of committee last week, states:

(c) MORATORIUM.—

(1) IN GENERAL.—Except as provided in paragraph (2), no State or political subdivision thereof may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the 10-year period beginning on the date of the enactment of this Act.

According to Tech Policy Press, the idea for a legislative “pause” to allow AI development room to “innovate” began with a 2024 blog post by R Street’s Adam Thierer. “With over 700 federal and state AI legislative proposals threatening to drown AI innovators in a tsunami of red tape, Congress should consider adopting a ‘learning period’ moratorium that would limit burdensome new federal AI mandates as well as the looming patchwork of inconsistent state and local laws,” Thierer wrote.

Putting a pin in my cynicism about “learning periods” granted to Big Tech, the fact is that on cyber policy, Republicans and Democrats have been united (at least in multiple hearings) on the theme that tech platforms have already acted irresponsibly in their unregulated market when it comes to mitigating child suicide, drug trafficking, non-consensual pornography, threats to lawful commerce, and other matters. Further, several states have already passed, or are proposing, laws aimed at specific harms, all of which are either directly or indirectly facilitated by AI technology.

For example, the Texas Senate recently and unanimously passed a bill designed to “Stop AI Generated Child Pornography,” and it is tough to imagine why Texas Representatives or Senators would pass legislation that would preempt their own state’s right and rationale to mitigate this egregious crime. Some may argue that the moratorium will not preempt the Texas law, or similar laws, but I think it is a safe bet that such laws would be ripe for a preemption challenge.

Perhaps no party will litigate to defend child pornography, but what about the rights of musical performers? In March of last year, music-rich Tennessee passed the ELVIS Act to prohibit the AI replication of voices without permission of the individual. The act further prohibits making available an algorithm, software, tool, et al. with the primary purpose or function of producing an unauthorized “likeness.” Given the interests of AI developers in various uses of likeness replication, Tennessee’s ELVIS Act would seem ideal for a preemption challenge, if Congress were to pass the moratorium. Indeed, Tennessee Senator Blackburn recently pushed back on the moratorium proposal, citing the ELVIS Act as a “first generation of the NO FAKES” bill that was reintroduced in Congress in April.

In California, the State Assembly Judiciary Committee recently passed AB-412, which would require AI developers to (upon request) provide information as to whether a rightsholder’s protected and registered works were used in model training. This provision, essentially requiring that a product maker take responsibility for materials in its supply chain, would almost certainly be struck down in a preemption challenge under the moratorium.

Ten Years is Forever in Tech Time

Returning to the cynicism I set aside, lawmakers on both sides of the aisle already know what 10+ years of letting Big Tech do what it wants looks like. Americans have already “learned” that lesson, and I have lost count of how many times Republicans and Democrats have disparaged the unconditional immunity of Section 230 and the industry’s callous disregard for the various harms it causes.

Yes, we are going to continue to debate and fight like hell over the bugaboo of misinformation, but in the meantime, Republicans cannot reasonably want to oppose state laws designed to protect their citizens from direct physical, emotional, and/or economic harm. We’ve been there and done that to death. Congress should not be persuaded to let Big Tech play in the lab for another decade just to see what happens.

Below is a list of laws enacted or proposed in several states, and Congress should take particular note of legislation designed to protect both children and adults from sexual abuse with generative AI.
