Americans Won’t Thank Washington for Protecting Big Tech in the NDAA

Unregulated artificial intelligence is a new pandemic. Parents know it. Consumers know it. Educators know it. State lawmakers know it. And Members of Congress know it. Blame who you will for COVID-19 or the price of eggs, but there is no question that the harms of AI, like surveillance pricing for those eggs, are being cooked up in the labs of Silicon Valley. And all the tech billionaires want for Christmas is for the federal government to stop state lawmakers from protecting Americans.

Federal unwillingness to regulate Big Tech is a pox on both parties dating back to at least the Obama administration, and so, in recent years, state lawmakers have moved to protect American citizens from tech oligarchs, who have made it clear through their actions that they recognize no moral obligation to make their products safe or secure, or to comply with healthy social or democratic principles. That such a virulent agenda might now be snuck into the national defense bill of the United States is an absurdity beyond comprehension.

Having failed to pass a 10-year federal moratorium barring state regulation of artificial intelligence in the “One Big Beautiful Bill”—the Senate rejected the provision 99 to 1—Big Tech billionaires now want Congress to attach the same rule to the must-pass National Defense Authorization Act (NDAA). Among the many hazards inherent to this anti-democratic, anti-American agenda is that it would in fact undermine national security.

National defense of the U.S. is a holistic calculus that goes beyond soldiers in uniform, weapons on hand, and the PT standards that seem to occupy much of the current secretary’s attention. It is a geographic, economic, intellectual, cultural, technological, and political consideration whose greatest strength is also its greatest weakness: it can only truly be eroded from within. Among the core assets of the U.S. is that “States serve as laboratories of democracy,” to quote a November 24 letter signed by dozens of state lawmakers of both parties urging rejection of the AI moratorium…

As state lawmakers and policymakers, we hear regularly from constituents about rising online harms and the growing influence of AI on their lives. In an increasingly fraught digital environment, young people face new risks online, seniors are increasingly targeted by AI-enabled scams, and workers and creators are encountering novel challenges in an AI-driven economy. In the years ahead, AI’s impact will require lawmakers to consider consequential public policy questions, making it essential that states retain the authority to act.

Just as a cable or rope derives its strength from many small strands combined, U.S. policy forged in state “laboratories” is, on the whole, a strength of the American system—especially when those efforts are designed to mitigate specific, identifiable harms to constituents. And those harms are reflected in the nearly 400 state bills being tracked by Reset Tech, an independent advocacy organization. From children suffering adverse and deadly health effects to unprecedented data privacy abuses, state lawmakers are working overtime to do the job Congress has yet to do despite years of strident hearings promising to hold Big Tech accountable for its culture of negligence.

For now, the least Congress can do is stay out of the way, and few if any of their constituents will complain that they declined to give Big Tech another free pass—let alone ten more years—to do whatever they want with our data, with jobs, with child safety, with national security. No industry enjoys such latitude, and no ordinary citizen of any political party benefits from the technological pandemic that will surely run amok at the speed of AI.

Americans across the political spectrum have seen through Big Tech’s bullshit, and they’re not buying it anymore. Generic promises of “innovation” don’t mean anything to parents trying to navigate the dangers of social media and chat products, or to seniors increasingly vulnerable to scams, or even to business enterprises trying to balance the opportunities of AI with the novel security risks it presents. Nobody is going to thank Congress or the White House for preventing state legislators from working to protect children, consumers, and local businesses. The idea of attaching such a provision to the NDAA is as politically naive as it is bad for the country.


Photo courtesy of Eric Feinberg, Coalition for a Safer Web.

Thaler Asks the Court to Make Copyright Policy


On October 30, counsel for Dr. Stephen Thaler requested that the U.S. Supreme Court hold its Petition for Certiorari in Thaler v. Perlmutter until after the Court rules on the matter of the dismissal of Copyright Office Director Shira Perlmutter by the White House in May. As the letter states, “The Blanche and Slaughter cases consider whether Director Perlmutter, a named party in the matter for which Dr. Thaler filed a Petition, shall continue in her position at the Copyright Office. As such, it has significant relevance for the outcome of the instant matter, particularly because her termination appears to be related to her stance on copyright for works created by artificial intelligence, which is the focus of Dr. Thaler’s case.”

Notwithstanding the Court’s obligation to decide whether the President has the authority to remove the head of the Copyright Office, there is little more than rumor and assumption that Director Perlmutter was dismissed because of her “stance” on artificial intelligence. And even if she was dismissed on that basis, it should have no bearing on whether the Court will weigh Dr. Thaler’s legal arguments, which are not in conflict with Perlmutter, but rather with the history of copyright law.

Most importantly, the human authorship doctrine, which Thaler seeks to erase, is not a philosophy unique to the views of Director Perlmutter, and the question is entirely separate from those raised in the jurisdictional matters relevant to the Blanche and Slaughter cases. The Court has ample guidance to find that the human authorship doctrine is well-founded in both the statutory text and the history and tradition of copyright law, and it should decide whether to grant cert on that basis.

Instead, with his request to hold cert, Dr. Thaler implies that the Court should wait to see whether a new appointee, friendly to the interests of AI developers, might replace Director Perlmutter. But even if that will be the result of the Blanche and Slaughter decisions, the Court is aware that 1) the Copyright Office, in its advisory capacity, does not make copyright law; and 2) Thaler’s argument for omitting the human authorship doctrine would have significant statutory, case law, and constitutional implications irrespective of who leads the Office.

Especially after the Court’s decision in Loper Bright Enterprises, overturning Chevron deference, it seems inconsistent to argue that the leadership of an agency, which has never been accorded Chevron deference, is in any way determinative of the foundational question presented by Dr. Thaler. In my view, the Court should deny cert on the grounds that the D.C. Circuit ruled correctly, but if it agrees to hear the case, it should not be distracted by the notion that copyright’s core principles are mere matters of one party’s opinion.

Rescuing Democracy from Democratization


Over the weekend, I had the privilege of participating in the 11th annual Mosaic Conference, organized by the Institute for Intellectual Property and Social Justice (IIPSJ) and hosted by Suffolk University Law School IP Center. Founded by Professor Lateef Mtima at Howard University, IIPSJ’s mission is to “…examine intellectual property law and policy—as well as the IP regime in total—to see where full participation of disadvantaged, excluded, and marginalized groups may need redressing.”

A number of subjects were raised that will inspire some future blogs, but in the meantime, the following contains my remarks about the folly of “democratization,” slightly edited for this format:

In his posthumously published book, Cyberlibertarianism: The Right-Wing Politics of Digital Technology, Professor David Golumbia writes, “As a rule, ‘democratization’ appears to mean tearing apart institutions, regardless of their nominal functions, including institutions whose purpose is to promote or even embody democracy.”

This is a very difficult moment to talk about knitting people and nations together when the exigent forces are so obviously centrifugal. The historian Joseph Ellis uses that word centrifugal in his book The Quartet to describe the sentiments of the newly independent American states and their reluctance to form the union, and it is hard to believe that that era, when roughly 4 million farmers barely knew the world more than 30 miles beyond their homes, might be compared to our digitally and globally interconnected present. But in my view, Big Tech’s claim to want to “democratize” everything, beginning with cultural works protected by copyright, was and remains catalytic to the struggle we now face to rescue the common cause of democracy.

In the United States, as the republican foundations that even allow room for discussions about social justice are under attack, we confront an authoritarianism that we recognize from history paired with a threat of technological feudalism that is unprecedented. At the same time that civil rights gains attained decades ago must now be reclaimed, rapid technological advancements in artificial intelligence also present new potential modes of injustice, and that challenge has many IP implications.

A simple example I have used recently begins with a friend in medical law who predicts that an AI will soon be better at reading a diagnostic scan than a human radiologist. He’s probably right, and of course, such promises, like improved healthcare, animate the political rhetoric used to promote yet another era of laissez-faire tech policy in the name of undefined “innovation.” As Jaron Lanier wrote in 2010, “People will accept ideas presented in technological form that would be abhorrent in any other form.”  I think this captures why the word innovation is allowed to sweep a million sins under a million rugs.

My friend’s medical example raises critical questions about who will own that technology in a winner-take-all market that often stifles competition, and, therefore, whether the tech will improve healthcare for more people or fewer and on what terms. Alternatively, while AI diagnostic tools might improve the quality of care for the few, will AI actuarial tools be used to deny access to the many? Of course, patent law, about which I know very little, will play a substantial role in the many questions implied by the medical example.

But in a copyright context, Silicon Valley, with the help of far too many IP academics, promoted the “democratization” of access to, and use of, cultural works via the allegedly free platforms. This egalitarian rhetoric was so appealing that even many professional creators echoed the sentiment and bought into the promise of working around traditional gatekeepers and forging more “organic” connections with fans. Today, fewer professional creators fare as well as their “pre-democratized” forerunners.

In that PR campaign funded by Silicon Valley, the making-available and derivative-works rights in particular were portrayed as anachronistic principles exclusively serving Big Media “landlords” controlling all culture and information. And while I might join certain criticisms of Big Media, especially consolidation of the industry, the “landlord” metaphor was and still is applied even to the independent artist who might presume to enforce her copyright rights.

More broadly, the underlying hypocrisy of this rhetoric is that “landlord,” of all words, is a far more apt description for the owners of virtual real estate, where information does not flow freely but is manipulated by algorithms designed to maximize and monetize even the most toxic forms of engagement. And of course, this includes both rampant copyright infringement and legal uploads of works that have now been harvested for the purpose of training artificial intelligence.

With generative AI, Big Tech—again with the help of many in IP academia—now promotes the alleged value of “democratizing” the production of works, finally revealing democratization as the anti-humanist and, therefore, anti-democratic term that it truly is. We have several current examples in amicus briefs, academic papers, and even one court’s opinion in the Bartz case, in which parties argue that mass production of material by machines somehow fulfills the original purpose of copyright law. For those following Thaler v. Perlmutter, Dr. Thaler’s recent petition for cert at the U.S. Supreme Court argues that the Copyright Office’s affirmation of the human authorship requirement “defies the constitutional goals from which Congress was empowered to create copyright, namely, the creation and dissemination of creative works.”

This is wrongly stated, but the attempt to undermine the human authorship doctrine is, of course, consistent with Big Tech’s ideological view that individual human agency is an outdated nuisance—a bug to program around in pursuit of a grand, tech-utopian dream. Or to put it another way, the scorn for human authorship is in harmony with Mark Zuckerberg recently proclaiming that the future of companionship is one in which we have more robot friends than human ones.

Long after the dust settles on the legality of AI model training with protected works, fundamental questions of social justice in a world with generative AI will need to be addressed. In addition to many examples in which these products are already causing social harm—most acutely adverse psychological effects among children and teens—generative AI can potentially swallow, or perhaps smother, economic opportunities for diversity of expression, perhaps even accelerating the current trend of government censorship.

In that regard, I find it astounding that the copyright skeptics in academia, generally aligned with the political left, promoted democratization by portraying copyright as a tool of censorship rather than as a mode of empowerment for authors. While the free market is not a perfect answer to all challenges, the spike in sales of Art Spiegelman’s Maus after it was banned in 2022, or even the market response that forced Jimmy Kimmel’s return to the air, are, in my view, examples of why the speech right and copyright more often act in concert as a force for democratic principles.

Notably, the IP skeptics have inveighed against strong copyright rights by invoking social justice principles, as if, for instance, the right of access without copyright’s boundaries were the moral equivalent of the right-to-read campaign now confronting real censorship. Moreover, social justice for the artist is often omitted because that school overstates a purely utilitarian foundation for copyright. Not only is that perspective belied by history, but it seems to me that for an IP regime to encompass social justice values, some natural rights principles must apply.

In this light, I think it is noteworthy that rather than pursue a federal publicity right in response to AI’s potential to replicate anyone’s likeness, the NO FAKES Act currently before the U.S. Congress borrows principles from trademark, copyright, and the right of publicity to create a novel IP right in one’s voice and likeness. Perhaps this moves the U.S. one step closer to some of the moral rights principles that animate copyright law in other countries.

It is no surprise that the tech industry so aggressively attacked intellectual property rights by selling the chimera of “democratization.” IP rights, at their best, foster an expansive and diverse world of competing ideas, whereas Big Tech’s interests—and the interests of authoritarians—are best served by organizing people into bunkers of competing realities. This epistemic crisis, I firmly believe, explains the wanton destruction of so many democratic institutions. And with generative AI, of course, it is easy to see how mass automation of synthetic material, posing as creative and informative works, is likely to exacerbate this problem.

Democratization is a beguiling term that no longer describes movement toward democratic forms. It exploits the language of democracy to mask an ideological contempt for democratic institutions and individual agency. It is a centrifugal force driving people, communities, and nations apart—a path to social, economic, and political anarchy, where bullies win and justice does not exist. Consequently, I would ask those in IP academia to be vigilant about the distinction between democratization and democracy and to push back on the rhetoric of the former in the hope that we can still rescue the latter.