Big Tech Gets AI Executive Order for Christmas

Not sure what to get the tech oligarch who has (literally) everything this holiday? Why not his very own Presidential Executive Order titled Ensuring a National Policy Framework for Artificial Intelligence? It’s the latest thing in political theater coming out of the Trump White House—a plan so detrimental in principle to the American public that almost no Members of Congress in either party wanted any piece of it when the plan was proposed as a federal moratorium on state AI regulations. Instead, it looks like David Sacks and Adam Thierer got what they want for Christmas this year.

“This EO reads like a policy paper drafted by Sacks and Thierer in a private room and slid across the Resolute Desk,” states creators’ rights advocate and attorney Chris Castle in a recent blog post. Perhaps the inevitable legal challenges will serve Big Tech’s intent to stonewall compliance with state regulations while they continue to move fast and break more things, or perhaps Congress will act to protect seniors, children, creators, business operators, and pretty much every citizen who may be harmed by unregulated AI.

The stated rationale of the EO proclaims an intent to establish a unified federal AI policy so that American tech companies can develop their products unburdened by a thicket of varying state regulations. Of course, the first problem with both the politics and the operation of the order is that there is no federal AI policy. Thus, the provision that, for instance, the DOJ will establish an AI Litigation Task Force to go after state laws raises the question of what it can possibly litigate when there are no federal statutes on which to base a complaint. More broadly, the order is ripe for constitutional challenges—Castle discusses five potential violations—and so, the EO presents yet another opportunity for chaos and lawsuits.

Meanwhile, most states have passed or proposed AI related laws designed to protect citizens from a range of abuses, including scams aimed at seniors and a parade of harmful effects on children and teens. The EO claims to want to address these and other matters, stating…

My Administration must act with the Congress to ensure that there is a minimally burdensome national standard — not 50 discordant State ones. The resulting framework must forbid State laws that conflict with the policy set forth in this order. That framework should also ensure that children are protected, censorship is prevented, copyrights are respected, and communities are safeguarded. A carefully crafted national framework can ensure that the United States wins the AI race, as we must.

But if past is prologue, the tech industry is counting on Congress to do no such thing as adopt a national framework that would resolve any of those concerns, least of all by establishing the one measure the industry has thus far avoided—meaningful liability. I don’t know how many headlines I’ve read, including quotes by lawmakers in both parties, articulating some variation on the theme that “we should not make the mistake with AI that we made with social media.” But that is precisely what the U.S. is doing, and with far more perilous consequences.

In the late 1990s, Congress decided to let internet service providers operate with little or no oversight because the industry was in its infancy and there was no appetite for “stifling innovation.” Fast forward to the present, and social media platforms are known to be so toxic to young people that Australia is now experimenting with an outright ban for users under the age of sixteen.

Beginning in 2017, when journalists and citizens worldwide finally recognized that Silicon Valley is not trustworthy and that social media is not an engine of democracy, Mark Zuckerberg cited AI as the generic solution for myriad ill effects caused by Meta platforms. He was lying, of course. But does anyone of any political persuasion currently believe that AI is not already exacerbating and exceeding the worst aspects of digital life? Because state lawmakers and attorneys general seem clear-eyed on the matter.

A coalition of 42 state AGs led by New Jersey’s Matthew Platkin is demanding that tech companies put an end to harmful chatbot products. In a December 10th announcement on General Platkin’s site, he declares, “As the chief law enforcement officers in our states, we must take action to protect the public from sycophantic and delusional behavior by software that risks breaking a host of criminal and civil laws.”

The premise that America needs an unregulated AI landscape in order to “win” against adversarial China is magical thinking. Because tech giants have no integrity when it comes to self-regulation, it is clear to lawmakers that only the imposition of effective liability will motivate the industry to mitigate unlawful or dangerous design flaws and/or uses of their products. Liability requires regulatory frameworks, and so, the states have done what Congress has thus far failed to do in order to protect American citizens.

Meanwhile, the rhetoric in the EO is consistent with the PR of the industry that insists the public focus on the technology rather than the ethically challenged people behind the technology. Adam Thierer, in a recent post, takes shots at the humanist v. AI perspective, arguing that humanists hypocritically reveal a lack of faith in humans. Of course, he’s right, just not the way he intends, because damn straight millions of us have zero faith in the humans making all the decisions about the development of AI.

We don’t trust the makers of dishwashers to operate without regulations. Why the hell would we give carte blanche to the most arrogant, power-hungry, anti-democratic, and greedy boys on Earth playing with a technology that may have existential consequences? That’s not a recipe for winning anything, and we shall see whether the president’s holiday gift to Big Tech leads to anything other than needless litigation when what Americans need are proper safeguards.

 

Americans Won’t Thank Washington for Protecting Big Tech in the NDAA

Unregulated artificial intelligence is a new pandemic. Parents know it. Consumers know it. Educators know it. State lawmakers know it. And Members of Congress know it. Blame who you will for COVID-19 or the price of eggs, but there is no question that the harms of AI, like surveillance pricing for those eggs, are being cooked up in the labs of Silicon Valley. And all the tech billionaires want for Christmas is for the federal government to stop state lawmakers from protecting Americans.

Federal unwillingness to regulate Big Tech is a pox on both parties dating back to at least the Obama administration, and so, in recent years, state lawmakers have moved to protect American citizens from tech oligarchs, who have made it clear through their actions that they recognize no moral obligation to make products safe or secure or in compliance with healthy social or democratic principles. That such a virulent agenda might now be snuck into the national defense bill of the United States is an absurdity beyond comprehension.

Having failed to pass a 10-year federal moratorium barring state regulation of artificial intelligence in the “One Big Beautiful Bill”—the Senate rejected the provision 99 to 1—Big Tech billionaires now want Congress to attach the same rule to the must-pass National Defense Authorization Act (NDAA). Among the many hazards inherent to this anti-democratic, anti-American agenda is that it would in fact undermine national security.

National defense of the U.S. is a holistic calculus that goes beyond soldiers in uniform, weapons on hand, and the PT standards that seem to occupy much of the current secretary’s attention. It is a geographic, economic, intellectual, cultural, technological, and political consideration whose greatest strength is also its greatest vulnerability: it can only truly be eroded from within. Among the core assets of the U.S. is that “States serve as laboratories of democracy,” to quote a November 24 letter signed by dozens of state lawmakers of both parties urging rejection of the AI moratorium…

As state lawmakers and policymakers, we hear regularly from constituents about rising online harms and the growing influence of AI on their lives. In an increasingly fraught digital environment, young people face new risks online, seniors are increasingly targeted by AI-enabled scams, and workers and creators are encountering novel challenges in an AI-driven economy. In the years ahead, AI’s impact will require lawmakers to consider consequential public policy questions, making it essential that states retain the authority to act.

Just as cable or rope strength is derived by combining small strands of material, U.S. policy forged in state “laboratories” is, on the whole, a strength of the American system—especially when those efforts are designed to mitigate specific, identifiable harms to constituents. And those harms sound in the nearly 400 state bills being tracked by Reset Tech, an independent advocacy organization. From children suffering adverse and deadly health effects to unprecedented data privacy abuses, state lawmakers are working overtime to do the job Congress has yet to do despite years of strident hearings promising to hold Big Tech accountable for its culture of negligence.

For now, the least Congress can do is stay out of the way, and few if any of their constituents will complain that they declined to give Big Tech another free pass—let alone ten more years—to do whatever they want with our data, with jobs, with child safety, with national security. No industry enjoys such latitude, and no ordinary citizen of any political party benefits from the technological pandemic that will surely run amok at the speed of AI.

Americans across the political spectrum have seen through Big Tech’s bullshit, and they’re not buying it anymore. Generic promises of “innovation” don’t mean anything to parents trying to navigate the dangers of social media and chat products, or to seniors increasingly vulnerable to scams, or even to business enterprises trying to balance the opportunities of AI with the novel security risks it presents. Nobody is going to thank Congress or the White House for preventing state legislators working to protect children, consumers, and local businesses. The idea of attaching such a provision to the NDAA is as politically naive as it is bad for the country.


Photo courtesy of Eric Feinberg, Coalition for a Safer Web.

Rescuing Democracy from Democratization

Over the weekend, I had the privilege of participating in the 11th annual Mosaic Conference, organized by the Institute for Intellectual Property and Social Justice (IIPSJ) and hosted by the Suffolk University Law School IP Center. IIPSJ, founded by Professor Lateef Mtima at Howard University, has as its mission to “…examine intellectual property law and policy—as well as the IP regime in total—to see where full participation of disadvantaged, excluded, and marginalized groups may need redressing.”

A number of subjects were raised that will inspire some future blogs, but in the meantime, the following contains my remarks about the folly of “democratization,” slightly edited for this format:

Professor David Golumbia, in his posthumously published book Cyberlibertarianism: The Right-Wing Politics of Digital Technology, writes, “As a rule, ‘democratization’ appears to mean tearing apart institutions, regardless of their nominal functions, including institutions whose purpose is to promote or even embody democracy.”

This is a very difficult moment to talk about knitting people and nations together when the exigent forces are so obviously centrifugal. The historian Joseph Ellis uses that word centrifugal in his book The Quartet to describe the sentiments of the newly independent American states and their reluctance to form the union, and it is hard to believe that that era, when roughly 4 million farmers barely knew the world more than 30 miles beyond their homes, might be compared to our digitally and globally interconnected present. But in my view, Big Tech’s claim to want to “democratize” everything, beginning with cultural works protected by copyright, was and remains catalytic to the struggle we now face to rescue the common cause of democracy.

In the United States, as the republican foundations that even allow room for discussions about social justice are under attack, we confront an authoritarianism that we recognize from history paired with a threat of technological feudalism that is unprecedented. At the same time that civil rights hills attained decades ago must now be reclaimed, rapid technological advancements in artificial intelligence also present new potential modes of injustice, and that challenge has many IP implications.

A simple example I have used recently begins with a friend in medical law who predicts that an AI will soon be better at reading a diagnostic scan than a human radiologist. He’s probably right, and of course, such promises, like improved healthcare, animate the political rhetoric used to promote yet another era of laissez-faire tech policy in the name of undefined “innovation.” As Jaron Lanier wrote in 2010, “People will accept ideas presented in technological form that would be abhorrent in any other form.”  I think this captures why the word innovation is allowed to sweep a million sins under a million rugs.

My friend’s medical example raises critical questions about who will own that technology in a winner-take-all market that often stifles competition, and, therefore, whether the tech will improve healthcare for more people or fewer and on what terms. Alternatively, while AI diagnostic tools might improve the quality of care for the few, will AI actuarial tools be used to deny access to the many? Of course, patent law, about which I know very little, will play a substantial role in the many questions implied by the medical example.

But in a copyright context, Silicon Valley, with the help of far too many IP academics, promoted the “democratization” of access to, and use of, cultural works via the allegedly free platforms. This egalitarian rhetoric was so appealing that even many professional creators echoed the sentiment and bought into the promise of working around traditional gatekeepers and forging more “organic” connections with fans. Today, fewer professional creators fare as well as their “pre-democratized” forerunners.

In that PR campaign funded by Silicon Valley, the making-available and derivative-works rights in particular were portrayed as anachronistic principles exclusively serving Big Media “landlords” controlling all culture and information. And while I might join certain criticisms of Big Media, especially consolidation of the industry, the “landlord” metaphor was and still is applied even to the independent artist who might presume to enforce her rights.

More broadly, the underlying hypocrisy of this rhetoric is that “landlord,” of all words, is a far more apt description for the owners of virtual real estate, where information does not flow freely but is manipulated by algorithms designed to maximize and monetize even the most toxic forms of engagement. And of course, this includes both rampant copyright infringement and legal uploads of works that have now been harvested for the purpose of training artificial intelligence.

With generative AI, Big Tech—again with the help of many in IP academia—now promotes the alleged value of “democratizing” the production of works, finally revealing democratization as the anti-humanist and, therefore, anti-democratic term that it truly is. We have several current examples in amicus briefs, academic papers, and even one court’s opinion in the Bartz case, in which parties argue that mass production of material by machines somehow fulfills the original purpose of copyright law. For those following Thaler v. Perlmutter, Dr. Thaler’s recent petition for cert at the U.S. Supreme Court argues that the Copyright Office’s affirmation of the human authorship requirement “defies the constitutional goals from which Congress was empowered to create copyright, namely, the creation and dissemination of creative works.”

This is wrongly stated, but the attempt to undermine the human authorship doctrine is, of course, consistent with Big Tech’s ideological view that individual human agency is an outdated nuisance—a bug to program around in pursuit of a grand, tech-utopian dream. Or to put it another way, the scorn for human authorship is in harmony with Mark Zuckerberg recently proclaiming that the future of companionship is one in which we have more robot friends than human ones.

Long after the dust settles on the legality of AI model training with protected works, fundamental questions of social justice in a world with generative AI will need to be addressed. In addition to many examples in which these products are already causing social harm—most acutely adverse psychological effects among children and teens—generative AI can potentially swallow, or perhaps smother, economic opportunities for diversity of expression, perhaps even accelerating the current trend of government censorship.

In that regard, I find it astounding that the copyright skeptics in academia, generally aligned with the political left, promoted democratization by portraying copyright as a tool of censorship rather than as a mode of empowerment for authors. While the free market is not a perfect answer to all challenges, the spike in sales of Art Spiegelman’s Maus after it was banned in 2022, or even the market response that forced the reinstatement of Jimmy Kimmel’s show, are, in my view, examples of why the speech right and copyright more often act in concert as a force for democratic principles.

Notably, the IP skeptics have inveighed against strong copyright rights by arguing social justice principles, as if, for instance, the right of access without copyright’s boundaries is the moral equivalent of the right to read campaign now confronting real censorship. Moreover, social justice for the artist is often omitted by that school’s overstating a purely utilitarian foundation for copyright. Not only is that perspective belied by history, but it seems to me that for an IP regime to encompass social justice values, some natural rights principles must apply.

In fact, in this light, I think it is noteworthy that rather than pursue a federal publicity right in response to AI’s potential to replicate anyone’s likeness, the NO FAKES Act currently before the U.S. Congress borrows principles from trademark, copyright, and the right of publicity to create a novel IP right in one’s voice and likeness. Perhaps this moves the U.S. one step closer to some of the moral rights principles that animate copyright law in other countries.

It is no surprise that the tech industry so aggressively attacked intellectual property rights by selling the chimera of “democratization.” IP rights, at their best, foster an expansive and diverse world of competing ideas, whereas Big Tech’s interests—and the interests of authoritarians—are best served by organizing people into bunkers of competing realities. This epistemic crisis, I firmly believe, explains the wanton destruction of so many democratic institutions. And with generative AI, of course, it is easy to see how mass automation of synthetic material, posing as creative and informative works, is likely to exacerbate this problem.

Democratization is a beguiling term that no longer describes movement toward democratic forms. It exploits the language of democracy to mask an ideological contempt for democratic institutions and individual agency. It is a centrifugal force driving people, communities, and nations apart—a path to social, economic, and political anarchy, where bullies win and justice does not exist. Consequently, I would ask those in IP academia to be vigilant about the distinction between democratization and democracy and to push back on the rhetoric of the former in the hope that we can still rescue the latter.