Truth Dies in Broad Daylight

"Democracy dies in darkness," according to the motto of the Washington Post, and this is, of course, just one of many phrases reciting the axiomatic theme that credible and responsibly reported information is the lifeblood of a democratic society like the United States. If true, then why has the “information age” brought democracy itself to the brink of destruction? There are many answers, including from those who would say that the question itself is alarmist—that, for instance, the “democracy in peril” narrative is a talking point of the political left with no foundation in evidence. But ain’t that the rub? Have we not crossed the event horizon of an epistemic crisis?

It bears repeating that a healthy democracy not only tolerates, but requires, a debate of competing ideas; but thanks largely to the major internet platforms, society has devolved to a shouting match of competing realities. No technological singularity required. We have already carved out a point in our little corner of spacetime that is dense enough to prevent truth from escaping. It may be self-evident that truth dies passively in silence, but truth can also be trampled to death by noise, and how could “democratizing information” ever have produced anything but a cacophony?

In a recent column for the Los Angeles Times, Anita Chabria asks, “Why is it OK for rich guys to steal my work?” She writes…

Retail theft is causing a civic meltdown and inspiring a ballot measure to incarcerate repeat toothpaste thieves.

But billionaire tech bros dismantling democracy for profit, stealing thousands of times a minute by selling advertising against something they don’t own? That barely gets a shrug, even as more media professionals are laid off, more publications close, and reliable information becomes so scarce and hard to spot that truth itself has become political.

Some might argue that news organizations have lost so much credibility that it hardly matters, and I cannot deny that I have read my share of careless articles under the imprimatur of respected brands, including the WaPo. But notwithstanding cultural and social changes that ebb and flow through any industry, the bottom line is that good investigative journalism is expensive, highly skilled, and time-consuming, and the internet industry has only served to make those obstacles larger, if not insurmountable.

First, social media fostered, and still perpetuates, an illusion that “citizen journalists” and raving pundits consistently uncover hidden truths which are obfuscated by the mainstream media. Second, social media demands feeding the beast 24/7, which forces the traditional news organization to prioritize speed over quality, thereby often fulfilling the prophecy that mainstream news is untrustworthy. And finally, the major social platforms resist paying for the news material they exploit for profit. In combination, how can these forces not cause a downward spiral in professional journalism, including the layoffs now being reported? And that’s before we truly see AI alter the landscape.

While it is impossible not to point to Trumpism as the paradigmatic—and potentially fatal—symptom of rampant conspiracy-mongering, the folly of democratizing information is shared across the political spectrum. The internet industry told the world that their platforms were the antidote to media conglomerates—the proverbial “gatekeepers,” who controlled, and even buried, the information to which people are entitled. And thus, Big Tech’s assault on copyright law often rode atop the half-baked slogan that “information wants to be free” in both senses—liberated and gratis. And everyone—nearly everyone—believed that bullshit.

Although copyright is commonly associated with creative and entertainment material, it was nonfiction works, including journalism, that were at the center of the constitutional framers’ attention when they drafted the “progress clause” in Article I. There’s a reason why that clause says, “to promote the progress of science,” and in one of my favorite papers about the adoption of copyright at the founding period, Professor Jane Ginsburg notes, “Petitions to Congress before enactment of the first copyright statute sought exclusive privileges for works overwhelmingly instructional in character.”

A century later, copyright protection would encompass a broad range of creative and performing arts, but at the outset, the framers understood that the Republic would fail in persistent darkness. Thus, the speech right, the press right, and copyright can be seen as working in concert toward the hope that future generations would have the “science” necessary to sustain the American experiment. Now, just over 230 years since the first Copyright Act and the Bill of Rights, I am hardly alone in wondering whether that “science” is lost, symbolized by the fourth estate shedding 500 jobs in January alone.

In 2021, Senator Klobuchar first introduced the Journalism Competition and Preservation Act (JCPA), which would provide a limited exemption from antitrust prohibitions against collective bargaining among news media organizations. Passage of the JCPA would enable news media companies to negotiate terms with giants like Meta, Google, et al. for licensing news content shared on those platforms, and Chabria cites a study from the University of Houston, which estimates that, with passage of the JCPA, the major platforms would owe news organizations between $11.9 billion and $13.9 billion per year. So, of course, the tech giants have used their lobbying power to block the bill.

Meanwhile, Big Tech continues to argue that they should not pay news organizations anything because their platforms “drive traffic” to the news channels. Artists will recognize this as the “exposure” rationale for piracy, and it takes some chutzpah to keep peddling this nonsense against a backdrop of layoffs and closings. It doesn’t take an economist to know that traffic alone does not pay for overhead and salaries—and that’s even if Google et al. actually increase traffic relative to pre-internet readership.

What we know for sure is that a democracy without a robust and free press is in danger of no longer remaining a democracy, and we know that news organizations have historically struggled to be financially sustainable. As the internet industry has done with music, motion pictures, literary works, etc., they sold the promise of access to news and information while siphoning the revenue that pays people to produce that material in the first place. And as we are witnessing in real-time, the vacuum is filled with charlatans, liars, cowards, and thieves. Thus, the proverbial “sunlight” promised by Big Tech is not a disinfectant, but a poorly made pesticide that animates the weeds and kills all the fruit.



In hearing with Big Tech, senators make headlines, but can they make headway?

On Wednesday, January 31, the Senate Judiciary Committee held a dramatic hearing titled “Big Tech and the Online Child Sexual Exploitation Crisis.” The gallery was filled with family members representing young victims of sexual exploitation, drug-related deaths, and the adverse mental health effects of social media that can lead to chronic illness and suicide. The witnesses who provided testimony and faced often tense grilling by senators included Mark Zuckerberg, CEO of Meta; Linda Yaccarino, CEO of X Corp.; Shou Chew, CEO of TikTok; Evan Spiegel, CEO of Snap Inc.; and Jason Citron, CEO of Discord Inc.

By now, many highlights have been published in the press and on social media, including Senator Graham’s opening salvo telling the witnesses they “have blood on their hands.” There was also Sen. Hawley’s rhetorical grilling of Zuckerberg, asking whether he had personally created a fund out of his billions to compensate any families. And then, there was Sen. Whitehouse, who stated quite simply, “We’re here because your platforms really suck at policing themselves,” thereby summarizing a bipartisan sentiment that has produced five bills passed by this committee alone.

Dramatic moments aside, though, what, if anything, will get done this year? As committee members themselves noted throughout the hearing, this is a road much travelled, and little has been accomplished, either through legislation or as voluntary measures by the platforms, to address the kind of harms at issue. Big Tech’s “tobacco moment” was supposed to be in 2021 when key witnesses and whistleblowers testified that, yes, social media platforms can cause harm to users, are designed to be addictive, and that industry executives put revenue ahead of safety.

Notwithstanding Senator Cruz and other Republicans blasting Mr. Chew over the valid but separate matter of TikTok’s alleged obligations to censor and/or provide information to the Chinese Communist Party, nearly every senator reiterated a theme of rare unanimity on the central issues before the committee. There is, of course, no political downside for either party when the issues involve children, sexual exploitation, suicide, and fentanyl, and the target is Big Tech. There should be no doubt that the intent to legislate is real, but several senators alluded to the platforms’ lack of cooperation and their lobbying power to avoid federal intervention.

For instance, among the bills cited and not wholly supported by online platforms, the SHIELD Act would criminalize the nonconsensual distribution of intimate visual depictions of persons—a subject that has been on the Hill since Rep. Speier first introduced a bill in 2015. Now, with advancements in AI tools that can be used to generate synthetic sexual material using the likeness of a real person (e.g., what happened to Taylor Swift), the issue is more complicated. And by my count, there are at least two House bills responding to AI as a method to achieve potentially more harmful results than the distribution of existing recorded material.[1]

Presumably, Congress will need to harmonize legislative efforts where there appears to be some redundancy in the intent to mitigate harm based on the nature of certain material and/or the means of production and distribution of that material. Moreover, the various issues raised in the hearing imply distinct forms of accountability (e.g., the design of a platform potentially harming mental health; the handling of material uploaded by users; or platforms being more transparent about negative effects).

In a future post, I will try to summarize all the proposed legislation designed to address specific harms caused or exacerbated by social media platforms. But one subject raised on Wednesday, and which must come first, is revision of Section 230 of the Communications Decency Act. As discussed here many times, Section 230 has been improperly read by the courts as a blanket immunity from civil litigation for online service providers, regardless of how irresponsibly the operators may address harmful material uploaded by a user of the platform.

Section 230 Front and Center

Sen. Graham declared that it’s time to repeal Section 230, while other senators were more measured, alluding to revision of the law. Regardless, there should be little doubt that Congress supports the premise that online platforms must be subject to litigation to incentivize more effective cooperation in addressing various harms. Most immediately, revision of 230 must make clear that platforms are not exempt from court orders to remove material that is harmful to the aggrieved party.

One of the most infuriating aspects of misapplication of 230 to date is not simply that the platform is never liable for the harm (because it may not be), but that a platform can avoid complying with injunctive relief—often little more than having the basic decency to remove material that is shown to be harmful. As Sen. Whitehouse made clear, the court is the venue for determining liability and remedies, and several of his colleagues noted that it is simply absurd that one multi-billion-dollar industry is automatically excused from those procedures.

Thus, as a foundational matter, it seems essential that Section 230 is substantially revised to ensure that people, like the families represented at the hearing, can pursue legal action without having the court automatically dismiss the claim. Of course, sound reform of 230 must reject the rhetoric of some lawmakers, including Sen. Cruz, who have muddied the waters with unfounded and unhelpful allegations of platform political bias. If nothing else, alleged viewpoint bias is not a subject of Section 230, and if lawmakers really want to help the kids, they must remain focused on ensuring that a family can have its day in court.

So, as stated, we’ve been here before. Wednesday’s hearing provided a pretty good highlights reel, but let’s see if this year, it can finally lead to any tangible solutions.


[1] Preventing Deepfakes of Intimate Images Act, and the No AI FRAUD Act.

Maybe Don’t Talk About Your CCB Claim on Social Media

The copyright small-claim alternative, adjudicated by the Copyright Claims Board (CCB), was intentionally designed to accommodate pro se participants, meaning that both claimants and respondents can represent themselves without hiring attorneys. After all, the foundation of small claims court or alternative dispute resolution is to save money. And indeed, we are seeing some early pro se claimants file complaints with the CCB, which began accepting claims on June 16th.

This occurred to me while co-moderating a copyright page on Facebook, where, of course, social media encourages a habit of saying or asking everything that comes to mind. But one aspect of legal training that copyright owner/claimants, or for that matter respondents, likely do not have is the discipline to keep mum about an active case, or at least to know what should and should not be discussed publicly.

Questions or statements about administrative procedures related to the CCB are safe topics to discuss in public, but parties to a case should remember that it is a legal proceeding with a discovery process. That means anything you say about the facts pertaining to the case itself—including intentions, timelines, beliefs, etc.—may be discoverable and may be entered into evidence by the opposing party. And announcing, griping, gloating, or just describing these matters on social media makes discovery very easy for the opposing party.

This is not to suggest that either claimants or respondents are going to lie or have much to hide of any relevance to a typical CCB case. But if you are a party on either side, it is just good practice to do what an attorney would tell you to do and simply not talk about the case publicly until it’s resolved.

Keeping this discipline could prove difficult for some. Both alleged infringers and anti-copyright ideologues are known to at least insult, if not harass, copyright owners looking to enforce their rights. “Greedy” may be the kindest thing someone calls you, but don’t take the bait, don’t feed the troll, and don’t talk about your case until it’s over. By the same token, if you’re the claimant and you’ve filed a CCB claim, it’s probably not a good idea to also engage in that odd form of digital-age justice generally called “shaming.”

The copyright antagonists want to see the CCB fail. As copyright owners and advocates, we want the small-claim alternative to work, and to work in a serious and fair manner grounded in the merits of claims and defenses. As such, both for your own sake and for the overall effectiveness of a brand-new system, if you are a party to a claim, it’s a good idea to exercise some social media discipline and keep most of the conversation about your case to yourself.