Not sure what to get the tech oligarch who has (literally) everything this holiday? Why not his very own Presidential Executive Order titled Ensuring a National Policy Framework for Artificial Intelligence? It’s the latest thing in political theater coming out of the Trump White House—a plan so detrimental in principle to the American public that almost no Members of Congress in either party wanted any piece of it when it was proposed as a federal moratorium on state AI regulations. Instead, it looks like David Sacks and Adam Thierer got what they wanted for Christmas this year.
“This EO reads like a policy paper drafted by Sacks and Thierer in a private room and slid across the Resolute Desk,” states creators’ rights advocate and attorney Chris Castle in a recent blog post. Perhaps the inevitable legal challenges will serve Big Tech’s intent to stonewall compliance with state regulations while they continue to move fast and break more things, or perhaps Congress will act to protect seniors, children, creators, business operators, and pretty much every citizen who may be harmed by unregulated AI.
The stated rationale of the EO proclaims an intent to establish a federal, unified AI policy so that American tech companies can develop their products unburdened by a thicket of various state regulations. Of course, the first problem with both the politics and the operation of the order is that there is no federal AI policy. Thus, the provision that, for instance, the DOJ will establish an AI Litigation Task Force to go after state laws raises the question of what it can possibly litigate when there are no federal statutes on which to base a complaint. More broadly, the order is ripe for constitutional challenges—Castle discusses five implicated violations—and so, the EO presents yet another opportunity for chaos and lawsuits.
Meanwhile, most states have passed or proposed AI-related laws designed to protect citizens from a range of abuses, including scams aimed at seniors and a parade of harmful effects on children and teens. The EO claims to want to address these and other matters, stating…
My Administration must act with the Congress to ensure that there is a minimally burdensome national standard — not 50 discordant State ones. The resulting framework must forbid State laws that conflict with the policy set forth in this order. That framework should also ensure that children are protected, censorship is prevented, copyrights are respected, and communities are safeguarded. A carefully crafted national framework can ensure that the United States wins the AI race, as we must.
But if past is prologue, the tech industry is counting on Congress to do no such thing as adopt a national framework that would resolve any of those concerns, least of all by establishing the one measure the industry has thus far avoided—meaningful liability. I don’t know how many headlines I’ve read, including quotes by lawmakers in both parties, articulating some variation on the theme that “we should not make the mistake with AI that we made with social media.” But that is precisely what the U.S. is doing, and with far more perilous consequences.
In the late 1990s, Congress decided to let internet service providers operate with little or no oversight because the industry was in its infancy and there was no appetite for “stifling innovation.” Fast forward to the present, and social media platforms are known to be so toxic for young people that Australia is now experimenting with an outright ban for users under the age of sixteen.
Beginning in 2017, when journalists and citizens worldwide finally recognized that Silicon Valley is not trustworthy and that social media is not an engine of democracy, Mark Zuckerberg cited AI as the generic solution for myriad ill effects caused by Meta platforms. He was lying, of course. But does anyone of any political persuasion currently believe that AI is not already exacerbating and exceeding the worst aspects of digital life? Because state lawmakers and attorneys general seem clear-eyed on the matter.
A coalition of 42 state AGs led by New Jersey’s Matthew Platkin is demanding that tech companies put an end to harmful chatbot products. In a December 10th announcement on General Platkin’s site, he declares, “As the chief law enforcement officers in our states, we must take action to protect the public from sycophantic and delusional behavior by software that risks breaking a host of criminal and civil laws.”
The premise that America needs an unregulated AI landscape in order to “win” against adversarial China is magical thinking. Because tech giants have no integrity when it comes to self-regulation, it is clear to lawmakers that only the imposition of effective liability will motivate the industry to mitigate unlawful or dangerous design flaws and/or uses of their products. Liability requires regulatory frameworks, and so, the states have done what Congress has thus far failed to do in order to protect American citizens.
Meanwhile, the rhetoric in the EO is consistent with the PR of the industry that insists the public focus on the technology rather than the ethically challenged people behind the technology. Adam Thierer, in a recent post, takes shots at the humanist v. AI perspective, arguing that humanists hypocritically reveal a lack of faith in humans. Of course, he’s right, just not the way he intends, because damn straight millions of us have zero faith in the humans making all the decisions about the development of AI.
We don’t trust the makers of dishwashers to operate without regulations. Why the hell would we give carte blanche to the most arrogant, power-hungry, anti-democratic, and greedy boys on Earth playing with a technology that may have existential consequences? That’s not a recipe for winning anything, and we shall see whether the president’s holiday gift to Big Tech leads to anything other than needless litigation when what Americans need are proper safeguards.