A fight is underway in Congress over an amendment to the “big beautiful” budget reconciliation bill that would put a 10-year moratorium on state laws governing certain uses of artificial intelligence. The amendment, proposed by Republicans and opposed by Democrats on the House Energy and Commerce Committee, is broad and has alarmed multiple stakeholders, including 36 State Attorneys General who signed a letter addressed to the House. The letter states, “The impact of such a broad moratorium would be sweeping and wholly destructive of reasonable state efforts to prevent known harms associated with AI.”
The language, which passed out of committee last week, states:
(c) MORATORIUM.—
(1) IN GENERAL.—Except as provided in paragraph (2), no State or political subdivision thereof may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the 10-year period beginning on the date of the enactment of this Act.
According to Tech Policy Press, the idea for a legislative “pause” to allow AI development room to “innovate” began with a 2024 blog post by R Street’s Adam Thierer. “With over 700 federal and state AI legislative proposals threatening to drown AI innovators in a tsunami of red tape, Congress should consider adopting a ‘learning period’ moratorium that would limit burdensome new federal AI mandates as well as the looming patchwork of inconsistent state and local laws,” Thierer wrote.
Putting a pin in my cynicism about “learning periods” granted to Big Tech, the fact is that on cyber policy, Republicans and Democrats have been united (at least in multiple hearings) on the theme that tech platforms have already acted irresponsibly in their unregulated market when it comes to mitigating child suicide, drug trafficking, non-consensual pornography, threats to lawful commerce, and other matters. Further, several states have already passed, or are proposing, laws aimed at specific harms, all of which are either directly or indirectly facilitated by AI technology.
For example, the Texas Senate recently and unanimously passed a bill designed to “Stop AI Generated Child Pornography,” and it is tough to imagine why Texas Representatives or Senators would pass federal legislation that preempts their own state’s authority to mitigate this egregious crime. Some may argue that the moratorium would not preempt the Texas law, or similar laws, but I think it is a safe bet that such laws would be ripe for a preemption challenge.
Perhaps no party will litigate to defend child pornography, but what about the rights of musical performers? In March of last year, music-rich Tennessee passed the ELVIS Act to prohibit the AI replication of voices without permission of the individual. The act further prohibits making available an algorithm, software, tool, etc. with the primary purpose or function of producing an unauthorized “likeness.” Given the interests of AI developers in various uses of likeness replication, Tennessee’s ELVIS Act would seem ideal for a preemption challenge, if Congress were to pass the moratorium. Indeed, Tennessee Senator Marsha Blackburn recently pushed back on the moratorium proposal, citing the ELVIS Act as a “first generation of the NO FAKES” bill that was reintroduced in Congress in April.
In California, the State Assembly Judiciary Committee recently passed AB-412, which would require AI developers, upon request, to disclose whether a rightsholder’s protected and registered works were used in model training. This provision, essentially requiring that a product maker take responsibility for materials in its supply chain, would almost certainly fail a preemption challenge under the moratorium.
Ten Years Is Forever in Tech Time
Returning to the cynicism I set aside, lawmakers on both sides of the aisle already know what 10+ years of letting Big Tech do what it wants looks like. Americans have already “learned” that lesson, and I have lost count of how many times Republicans and Democrats have disparaged the unconditional immunity of Section 230 and the industry’s callous disregard for the various harms it causes.
Yes, we are going to continue to debate and fight like hell over the bugaboo of misinformation, but in the meantime, Republicans cannot reasonably oppose state laws designed to protect their citizens from direct physical, emotional, and/or economic harm. We’ve been there and done that to death. Congress should not be persuaded to let Big Tech play in the lab for another decade just to see what happens.
Below is a list of laws enacted or proposed in several states, and Congress should take particular note of legislation designed to protect both children and adults from sexual abuse with generative AI.
Indiana
In 2024, Indiana enacted two laws addressing the proliferation of deepfake media for political campaigns and nonconsensual sexual images. The sexual deepfake law (IN HB 1047) defines certain images created by AI to constitute an “intimate image” for purposes of the crime of distributing an intimate image.
Kansas
Kansas law makes it illegal to possess, create, or distribute child sexual abuse material generated by artificial intelligence. This includes images or videos of minors that are digitally manipulated to appear sexually explicit.
State Devices and Networks: Kansas prohibits the use of certain AI platforms, especially those deemed “of concern” due to potential risks related to data privacy, misinformation, and national security, on state-owned devices and networks.
Specific Platforms Targeted: House Bill 2313, enacted in April 2025, specifically bans platforms like DeepSeek (a Chinese AI model) and others controlled by countries considered foreign adversaries.
Kansas requires contractors providing services to the state that utilize AI to disclose this in their contracts, particularly when handling state-owned data.
Mississippi
Criminalization of AI-Generated CSAM: Mississippi law criminalizes the creation, distribution, and possession of AI-generated or computer-edited child sexual abuse material. This was achieved through the enactment of MS HB 1126 in 2024.
Mississippi is in the early stages of addressing the implications of AI. The state has formed a task force to guide the responsible implementation of AI in education and has issued an executive order to promote collaboration among state agencies.
Missouri
Proposed
Senate Bill 509: Mandates disclaimers on political advertisements created using AI.
House Bill 673: Proposes requirements for AI-generated political ads.
Montana
Legislative Initiatives: Lawmakers are working on a framework to regulate AI, focusing on a “narrow, detailed” approach.
Nebraska
The Artificial Intelligence Consumer Protection Act (LB642): This act aims to protect consumers from potential harm caused by AI systems, particularly high-risk systems, by requiring developers to take reasonable steps to prevent algorithmic discrimination.
Bills Focused on Protecting Children Online: These include the Parental Rights in Social Media Act (LB 383) which requires parental consent for minors’ social media accounts, and measures restricting student phone use during school hours (LB 140). The Age-Appropriate Online Design Code Act (LB 504) mandates features that prevent compulsive usage and protect against harm on social media and online services.
Additional AI-related Legislation: LB 172 prohibits the creation and distribution of AI-generated child pornography.
North Carolina
Sexual Deepfake Law (NC HB 591): Enacted in 2024, this law addresses the creation and use of AI-generated sexual deepfakes. It creates the crime of sextortion, specifically including the use of AI-generated sexual images to coerce or compel a victim. The law also updates criminal provisions related to the sexual exploitation of minors to encompass computer-generated depictions.
House Bill 1036: Establishes the North Carolina Artificial Intelligence Task Force to study AI’s impact and recommend policies.
Ohio
Proposed
Senate Bill 217: Requires AI-generated content to include watermarks and criminalizes the creation, possession, and distribution of AI-generated depictions of minors, or simulated obscene material, in a sexual or obscene manner.
AI in Education Strategy: Launched to prepare educators and students for AI integration in education.
South Dakota
In 2024, South Dakota enacted legislation (SD SB 79) that addresses the use of AI in child pornography, specifically focusing on “deepfakes” and computer-generated child sexual abuse material (CSAM).
On March 25, 2025, South Dakota Gov. Larry Rhoden signed a bill that regulates the use of AI deepfakes in elections. The bill targets the use of intentionally harmful and unlabeled deepfakes of South Dakota politicians within 90 days of an election.
Tennessee
Tennessee law criminalizes the creation and distribution of AI-generated child sexual abuse material (CSAM), with penalties ranging from Class B to Class C felonies, depending on the offense. The state also targets technology and software used to generate this content.
Tennessee law requires universities and local school boards to develop and implement policies regarding the use of AI technology by students, faculty, and staff for instructional and assignment purposes.
The ELVIS Act: Officially known as the Ensuring Likeness Voice and Image Security Act, this landmark legislation, enacted in 2024, protects musicians and other performers from the unauthorized use of their voice and likeness by AI.
Texas
Effective July 1, 2024, the Texas Data Privacy and Security Act (TDPSA) applies to large companies that do business in Texas and that sell, collect, or process personal data.
Texas Responsible AI Governance Act (TRAIGA): Proposed legislation aiming to ban “unacceptable risk” AI systems, such as those manipulating human behavior or creating unauthorized deepfakes.
Effective Sept. 1, 2024, the SCOPE Act (Securing Children Online through Parental Empowerment) requires digital service providers, such as companies that own websites, apps, and software, to protect minor children (under 18) from harmful content and data collection practices.
West Virginia
House Bill 5690: Establishes the West Virginia Task Force on Artificial Intelligence to study AI’s impact and recommend policies.
Wyoming
Foundation Model Registration (Draft Bill 24LSO-0239): Proposes requiring providers of AI foundation models to register with the state.
Education Guidance: The Wyoming Department of Education released guidance for school districts on developing AI use policies.