Big Tech Gets AI Executive Order for Christmas


Not sure what to get the tech oligarch who has (literally) everything this holiday? Why not his very own Presidential Executive Order titled “Ensuring a National Policy Framework for Artificial Intelligence”? It’s the latest thing in political theater coming out of the Trump White House—a plan so detrimental in principle to the American public that almost no Members of Congress in either party wanted any piece of it when it was proposed as a federal moratorium on state AI regulations. Instead, it looks like David Sacks and Adam Thierer got what they wanted for Christmas this year.

“This EO reads like a policy paper drafted by Sacks and Thierer in a private room and slid across the Resolute Desk,” writes creators’ rights advocate and attorney Chris Castle in a recent blog post. Perhaps the inevitable legal challenges will serve Big Tech’s intent to stonewall compliance with state regulations while the industry continues to move fast and break more things, or perhaps Congress will act to protect seniors, children, creators, business operators, and pretty much every citizen who may be harmed by unregulated AI.

The stated rationale of the EO proclaims an intent to establish a federal, unified AI policy so that American tech companies can develop their products unburdened by a thicket of various state regulations. Of course, the first problem with both the politics and the operation of the order is that there is no federal AI policy. Thus, the provision directing, for instance, that the DOJ establish an AI Litigation Task Force to go after state laws raises the question of what it can possibly litigate when there are no federal statutes on which to base a complaint. More broadly, the order is ripe for constitutional challenges—Castle discusses five implicated violations—and so, the EO presents yet another opportunity for chaos and lawsuits.

Meanwhile, most states have passed or proposed AI-related laws designed to protect citizens from a range of abuses, including scams aimed at seniors and a parade of harmful effects on children and teens. The EO claims to want to address these and other matters, stating…

My Administration must act with the Congress to ensure that there is a minimally burdensome national standard — not 50 discordant State ones.  The resulting framework must forbid State laws that conflict with the policy set forth in this order.  That framework should also ensure that children are protected, censorship is prevented, copyrights are respected, and communities are safeguarded.  A carefully crafted national framework can ensure that the United States wins the AI race, as we must.

But if past is prologue, the tech industry is counting on Congress to do no such thing as adopt a national framework that would resolve any of those concerns, least of all by establishing the one measure the industry has thus far avoided—meaningful liability. I don’t know how many headlines I’ve read, including quotes by lawmakers in both parties, articulating some variation on the theme that “we should not make the mistake with AI that we made with social media.” But that is precisely what the U.S. is doing, and with far more perilous consequences.

In the late 1990s, Congress decided to let internet service providers operate with little or no oversight because the industry was in its infancy and there was no appetite for “stifling innovation.” Fast forward to the present, and social media platforms are known to be so toxic for young people that Australia is now attempting an outright ban for users under the age of sixteen.

Beginning in 2017, when journalists and citizens worldwide finally recognized that Silicon Valley is not trustworthy and that social media is not an engine of democracy, Mark Zuckerberg cited AI as the generic solution for myriad ill effects caused by Meta platforms. He was lying, of course. But does anyone of any political persuasion currently believe that AI is not already exacerbating and exceeding the worst aspects of digital life? Because state lawmakers and attorneys general seem clear-eyed on the matter.

A coalition of 42 state AGs led by New Jersey’s Matthew Platkin is demanding that tech companies put an end to harmful chatbot products. In a December 10th announcement on Attorney General Platkin’s site, he declares, “As the chief law enforcement officers in our states, we must take action to protect the public from sycophantic and delusional behavior by software that risks breaking a host of criminal and civil laws.”

The premise that America needs an unregulated AI landscape in order to “win” against adversarial China is magical thinking. Because tech giants have no integrity when it comes to self-regulation, it is clear to lawmakers that only the imposition of effective liability will motivate the industry to mitigate unlawful or dangerous design flaws and/or uses of their products. Liability requires regulatory frameworks, and so, the states have done what Congress has thus far failed to do in order to protect American citizens.

Meanwhile, the rhetoric in the EO is consistent with the PR of the industry that insists the public focus on the technology rather than the ethically challenged people behind the technology. Adam Thierer, in a recent post, takes shots at the humanist v. AI perspective, arguing that humanists hypocritically reveal a lack of faith in humans. Of course, he’s right, just not the way he intends, because damn straight millions of us have zero faith in the humans making all the decisions about the development of AI.

We don’t trust the makers of dishwashers to operate without regulations. Why the hell would we give carte blanche to the most arrogant, power-hungry, anti-democratic, and greedy boys on Earth playing with a technology that may have existential consequences? That’s not a recipe for winning anything, and we shall see whether the president’s holiday gift to Big Tech leads to anything other than needless litigation when what Americans need are proper safeguards.

 

Americans Won’t Thank Washington for Protecting Big Tech in the NDAA

Unregulated artificial intelligence is a new pandemic. Parents know it. Consumers know it. Educators know it. State lawmakers know it. And Members of Congress know it. Blame who you will for COVID-19 or the price of eggs, but there is no question that the harms of AI, like surveillance pricing for those eggs, are being cooked up in the labs of Silicon Valley. And all the tech billionaires want for Christmas is for the federal government to stop state lawmakers from protecting Americans.

Federal unwillingness to regulate Big Tech is a pox on both parties dating back to at least the Obama administration, and so, in recent years, state lawmakers have moved to protect American citizens from tech oligarchs, who have made clear through their actions that they recognize no moral obligation to make products safe, secure, or consistent with healthy social and democratic principles. That such a virulent agenda might now be snuck into the national defense bill of the United States is an absurdity beyond comprehension.

Having failed to pass a 10-year federal moratorium barring state regulation of artificial intelligence in the “One Big Beautiful Bill”—the Senate rejected the provision 99 to 1—Big Tech billionaires now want Congress to attach the same rule to the must-pass National Defense Authorization Act (NDAA). Among the many hazards inherent to this anti-democratic, anti-American agenda is that it would in fact undermine national security.

National defense of the U.S. is a holistic calculus that goes beyond soldiers in uniform, weapons on hand, and the PT standards that seem to occupy much of the current secretary’s attention. It is a geographic, economic, intellectual, cultural, technological, and political consideration in which the greatest strength is also the greatest vulnerability: the nation can only truly be eroded from within. Among the core assets of the U.S. is that “States serve as laboratories of democracy,” to quote a November 24 letter signed by dozens of state lawmakers of both parties urging rejection of the AI moratorium…

As state lawmakers and policymakers, we hear regularly from constituents about rising online harms and the growing influence of AI on their lives. In an increasingly fraught digital environment, young people face new risks online, seniors are increasingly targeted by AI-enabled scams, and workers and creators are encountering novel challenges in an AI-driven economy. In the years ahead, AI’s impact will require lawmakers to consider consequential public policy questions, making it essential that states retain the authority to act.

Just as the strength of a cable or rope derives from the combination of many small strands, U.S. policy forged in state “laboratories” is, on the whole, a strength of the American system—especially when those efforts are designed to mitigate specific, identifiable harms to constituents. And those harms are reflected in the nearly 400 state bills being tracked by Reset Tech, an independent advocacy organization. From children suffering adverse and deadly health effects to unprecedented data privacy abuses, state lawmakers are working overtime to do the job Congress has yet to do despite years of strident hearings promising to hold Big Tech accountable for its culture of negligence.

For now, the least Congress can do is stay out of the way, and few if any of their constituents will complain that they declined to give Big Tech another free pass—let alone ten more years—to do whatever they want with our data, with jobs, with child safety, with national security. No industry enjoys such latitude, and no ordinary citizen of any political party benefits from the technological pandemic that will surely run amok at the speed of AI.

Americans across the political spectrum have seen through Big Tech’s bullshit, and they’re not buying it anymore. Generic promises of “innovation” don’t mean anything to parents trying to navigate the dangers of social media and chat products, or to seniors increasingly vulnerable to scams, or even to business enterprises trying to balance the opportunities of AI with the novel security risks it presents. Nobody is going to thank Congress or the White House for preventing state legislators from working to protect children, consumers, and local businesses. The idea of attaching such a provision to the NDAA is as politically naive as it is bad for the country.


Photo courtesy of Eric Feinberg, Coalition for a Safer Web.

Thaler Asks the Court to Make Copyright Policy


On October 30, counsel for Dr. Stephen Thaler requested that the U.S. Supreme Court hold its Petition for Certiorari in Thaler v. Perlmutter until after the Court rules on the matter of the dismissal of Copyright Office Director Shira Perlmutter by the White House in May. As the letter states, “The Blanche and Slaughter cases consider whether Director Perlmutter, a named party in the matter for which Dr. Thaler filed a Petition, shall continue in her position at the Copyright Office. As such, it has significant relevance for the outcome of the instant matter, particularly because her termination appears to be related to her stance on copyright for works created by artificial intelligence, which is the focus of Dr. Thaler’s case.”

Notwithstanding the Court’s obligation to decide whether the President has the authority to remove the head of the Copyright Office, there is little more than rumor and assumption that Director Perlmutter was dismissed because of her “stance” on artificial intelligence. And even if she was dismissed on that basis, it should have no bearing on whether the Court will weigh Dr. Thaler’s legal arguments, which are not in conflict with Perlmutter, but rather with the history of copyright law.

Most importantly, the human authorship doctrine, which Thaler seeks to erase, is not a philosophy unique to the views of Director Perlmutter, and the question is entirely separate from those raised in the jurisdictional matters relevant to the Blanche and Slaughter cases. The Court has ample guidance to find that the human authorship doctrine is well-founded in both the statute and the history and tradition of copyright law, and it should decide whether to grant cert on that basis.

Instead, with his request to hold cert, Dr. Thaler implies that the Court should wait to see whether a new appointee, friendly to the interests of AI developers, might replace Director Perlmutter. But even if that is the result of the Blanche and Slaughter decisions, the Court is aware that 1) the Copyright Office, in its advisory capacity, does not make copyright law; and 2) Thaler’s argument for eliminating the human authorship doctrine would have significant statutory, case law, and constitutional implications irrespective of who leads the Office.

Especially after the Court’s decision in Loper Bright Enterprises, overturning Chevron deference, it seems inconsistent to argue that the leadership of an agency, which has never been accorded Chevron deference in the first place, is in any way determinative of the foundational question presented by Dr. Thaler. In my view, the Court should deny cert on the grounds that the D.C. Circuit ruled correctly, but if it agrees to hear the case, it should not be distracted by the notion that copyright’s core principles are mere matters of one party’s opinion.