Cyber Civil Rights Initiative Files Common Sense Brief in Major Section 230 Case

In my recent post about Gonzalez v. Google—the Section 230 case granted cert by the Supreme Court—I expressed the view that the word “recommendation” is too charming to describe the interaction between social media algorithms and many users’ experiences. Systems capable of reinforcing suicidal ideations in a teenager or stoking violent instincts in a potential terrorist cannot sensibly be described as “recommending” the kind of content associated with these and other dangerous outcomes. And although petitioner Gonzalez specifically asks the Court to decide whether “algorithmic recommendation” is shielded from liability under Section 230 of the Communications Decency Act, the amicus brief filed by the Cyber Civil Rights Initiative (CCRI) and Legal Scholars asks the Court for a more nuanced reading of the question. From the brief…

Amici emphasize that this case cannot be correctly decided by focusing on “traditional editorial functions” or by trying to craft a general rule about whether “targeted algorithms” fall within Section 230’s immunity provision…. To categorically deny immunity to an ICSP for using targeted algorithms would directly contradict Section 230(c)(2) and finds no support in Section 230(c)(1). Such an interpretation would also have a devastating impact on the victims of online abuse by dissuading Good Samaritan ICSPs from using targeted algorithms to remove, restrict, or otherwise reduce the accessibility of harmful material, including nonconsensual pornography.

CCRI, which works to address and remedy various forms of harassment and civil rights abuses committed via interactive computer service providers (ICSPs), asks the Court to restore the textually coherent and common-sense meaning of Section 230, which was written to encourage service providers to mitigate harmful material—not to unconditionally immunize them from liability for hosting it. For almost twenty years, lower courts have consistently misread Section 230 as providing automatic immunity so long as the material at issue is posted by someone other than the platform owners/managers.

This chronic misreading of Section 230 results in two significant problems: 1) dismissal at the summary judgment stage of any claim in which an ICSP may be liable; and 2) failures to provide injunctive relief where the ICSP is not liable but may be ordered to remove material which the court agrees is causing harm to a complainant. As things stand, a site that intentionally trades in harmful material is immunized, and so is a site that unintentionally hosts harmful material but elects not to remove the material for its own reasons. The rationales vary as to why “neutral” platform operators often refuse to remove material alleged, or even proven, to be harmful, but for too long, the industry has echoed the absurd premise that removing anything from a social platform is incompatible with “a free and open internet.”

Section 230 Is (Was) Not Novel Legislative Territory

The CCRI brief is so firmly grounded in the legislative history of Section 230 that it is difficult to fathom how any court—let alone many courts—strayed so far, and for so long, from a plain-text reading of the statute. In describing the common-law (i.e., not groundbreaking) underpinnings of Section 230, for instance, CCRI cites the distinction between a “publisher” and a “distributor” of defamatory material thus:

… “[d]efamation at common law distinguished between publisher and distributor liability.” While a publisher was strictly liable for carrying defamatory matter, a distributor who only “delivers or transmits defamatory matter published by a third person is subject to liability if, but only if, he knows or has reason to know of its defamatory character.” [Emphasis added.]

This is common sense well founded in law. If an individual or a business has knowledge that he/it is facilitating harm caused by a separate, directly liable party, that facilitation may give rise to secondary civil or criminal liability. The newsstand operator is not liable for inadvertently selling adult magazines containing underage models, but if he knows about it, he is probably—and deservedly—in big trouble.

This basic principle of secondary liability applies everywhere except for internet platforms—and only because the courts have so thoroughly misconstrued Section 230 by conflating two sub-sections of the statute, which are meant to be read independently. As the CCRI brief explains, 230(c)(1) states that merely providing access to third-party content (e.g., YouTube hosting a video uploaded by a user) does not make the ICSP a “publisher” or “speaker.” Then, 230(c)(2) states that voluntarily making a good-faith effort to remove objectionable material does not make the ICSP generally liable as a “publisher” of everything it hosts.

“Cases reading Section 230 to have a broader preemptive effect than provided for in (c)(1) and (c)(2) have departed from the statutory text,” states the CCRI brief. It emphasizes the fact that “distributor liability” is envisioned by Section 230(c)(1) where the ICSP has knowledge of the harmful material, and it argues that the function of Section 230(c)(2) is legislatively “parallel” to state Good Samaritan laws written to immunize ordinary citizens against unreasonable liability when we make good-faith efforts to help someone in need of assistance. Prior to these laws, an individual intending to render aid to a stranger could be held liable for inadvertently causing harm, but as the CCRI brief states:

… like state Good Samaritan statutes, Section 230(c)(2) includes important limits to the immunity it provides. First, it does not apply when an ICSP is already under an existing duty to act—i.e., where its action to restrict access to objectionable third-party content is not “voluntary.” Nor does it immunize ICSPs that do nothing to address harm or that contribute to or profit from harm.

Again, this is just common sense grounded in common law that applies everywhere except the internet. If one does not initiate illegal activity but seeks to benefit from that activity, one may be liable for the harm caused. It is inconceivable that Congress ever intended to exempt the multi-billion-dollar internet industry from this longstanding principle. And that’s because it intended no such thing.

It will be interesting to see what amici who file on behalf of Google will argue in this case. Other than the usual panegyrics to the internet, I am curious to see whether, for instance, the EFF will have anything coherent to say in defense of two decades’ worth of textual misreading. Typically, defenders of the status quo reading of Section 230 write about threats to “the internet” as if a lack of immunity automatically results in a finding of liability and damages. But on the contrary, a proper reading of the law simply means that an ICSP cannot so easily dismiss every claim and that the injured party is allowed her day in court to prove whether a platform had, or has, a duty to act. Litigating against tech giants is hardly a fair fight in the first place, and ICSPs neither need nor deserve an unconditional immunity that exists nowhere else in the justice system.

Art is Human

A few months ago, I attended a local event where photographer Doug Menuez spoke about his project “Wild Place: The People of Kingston, NY.” The description on his website begins…

Wild Place is the English translation of Wiltwyck, the original name given to Kingston, New York, in 1661 by Peter Stuyvesant and the Dutch who were facing fierce resistance from local Native Americans. My wife Tereza and I recently moved back to Kingston after a decade away and can see lots of changes, with more to come. It seems like an important moment.

Combining portrait and documentary in both photographs and short video interviews, “Wild Place” presents contemporary Kingston through Menuez’s view of its artists, activists, entrepreneurs, community leaders, and—not surprisingly—people who fit all those descriptions. While listening to Doug talk about the project, I was reminded why I care so much about artists and their work:  because through art and artists, we renew profound, even cathartic, connections to what it means to be human and, in turn, reinforce the reasons why humans bother to make art. My schedule does not permit frequent attendance at such events, but listening to Doug’s articulate, thoughtful, even spiritual discussion about his work was as close as I come to listening to a sermon.

In my last post commenting on visual works generators like DALL-E, et al., I reiterated the view held by many that the notion of “AI art” is oxymoronic—as devoid of meaning as having a machine perform a religious rite for its human owner. Whatever creative work without humans ought to be called, it is not art. As such, I maintain that nobody will be interested in works made exclusively by machines for very long and that the current buzz about these generative algorithms may ebb quickly into the sea of trends to swirl in gooey eddies of crypto and NFTs.

This is not to suggest that creators and advocates of creators’ rights should ignore current threats to human artists, or that generative AIs do not presage an even darker version of the “information age” than the present state of madness. In a Facebook post that has been widely shared, a philosophy professor describes catching the first student in his class to use a bot called ChatGPT to write an assigned essay about David Hume. “The essay confidently and thoroughly described Hume’s views on the paradox of horror in a way that were [sic] thoroughly wrong,” the professor writes. “It did say some true things about Hume, and it knew what the paradox of horror was, but it was just bullshitting after that. To someone who didn’t know what Hume would say about the paradox, it was perfectly readable—even compelling.”

That last sentence is unsettling in a world buffeted by conspiracy mongers and alternative facts. No Alex Jones or Donald Trump or Stewart Rhodes required. The next cult figure can be an algorithm producing a “readable—even compelling” restatement on any matter from the Enlightenment to the suppression of viral disease. It is intriguing, if depressing, that a college student attempted to cheat by means of an AI to avoid honest engagement with Hume’s essay Of Tragedy, which contains the following observation:

We find that common liars always magnify, in their narrations, all kinds of danger, pain, distress, sickness, deaths, murders, and cruelties; as well as joy, beauty, mirth, and magnificence. It is an absurd secret, which they have for pleasing their company, fixing their attention, and attaching them to such marvellous relations, by the passions and emotions, which they excite.

Hume could be commenting on the recently announced Trump NFT “trading cards,” which appear to comprise stolen images from the internet and badly photoshopped heads in a series of bizarre portraits depicting Trump as soldier, rancher, business leader, and even a costumed and be-muscled superhero with lasers shooting from his eyes. I got nothin’ except to say that there is no paradoxical pleasure in viewing this particular horror.

On a more sophisticated level, generative algorithms like MidJourney, DALL-E, and Stable Diffusion are all “trained” by inputting a corpus of human-made creative works, most of which are scraped from the internet without the permission of the living artists who still own the rights to the works. As PetaPixel reports, MidJourney founder David Holz flatly admits feeding his system millions of images without permission, and illustrator Molly Crabapple, in an op-ed for the L.A. Times, writes:

While they destroy illustrators’ careers, AI companies are making fortunes. Stability AI, founded by hedge fund manager Emad Mostaque, is valued at $1 billion, and raised an additional $101 million of venture capital in October. Lensa generated $8 million in December alone. Generative AI is another upward transfer of wealth, from working artists to Silicon Valley billionaires.

That these AI “art” generators represent yet another example of economic destruction without the creative part is a certainty. Less certain are some of the copyright questions, for instance, whether input of protected works for “machine learning” is infringement. This will remain a theoretical/ideological debate for attorneys, academics, and copyright nerds like me until one of two things happens:  legislation or litigation, both of which move at a crawl compared to the market for new tech toys. If a lawsuit began tomorrow, for instance, it would be hard to say whether the legal questions presented would still be relevant to the market by the time the case is resolved.

Perhaps the real potential of the generative algorithm lies not with illustration or design or music composition, but with medical diagnostics or some other valuable purpose. If computer science is a true science, then it must allow for unintended discovery, and who’s to say that an experiment in “AI art” cannot be the precursor to an algorithm that helps identify genetic predisposition to certain infections?

This does not mean, of course, that we should excuse models in the present that undermine the rights or value of the human artist. On the contrary, I mention this alternative to emphasize the point that of all the things we can do with computing power, one thing we absolutely do not need is machines that make “art.” Tellingly, Hume’s essay is mostly about art, and to the question whether creative expression about tragedy can provoke a sense of pleasure for the audience, he replies:

This extraordinary effect proceeds from that very eloquence, with which the melancholy scene is represented. The genius required to paint objects in a lively manner, the art employed in collecting all the pathetic circumstances, the judgment displayed in disposing them:  the exercise, I say, of these noble talents, together with the force of expression, and beauty of oratorial numbers, diffuse the highest satisfaction on the audience, and excite the most delightful movements.

Maybe the AI cheerleaders will accuse me of anthropic maximalism, but in addition to doubting that an “AI artist” could ever express anything close to the transcendent experience Hume describes, I am certain that we do not want it to even try. Art is human. There are better uses for computers.


Photo by: Abrill

Bittertweet Symphony

One of my first mantras when I started this blog was I hate Twitter, but that was shorthand for the broader view that social media is a trainwreck. Of course, the existential difficulty presented by these platforms is that while they can be highly toxic, as long as the market remains, one must have a presence if one has a business or anything else to promote. Leaving Twitter or the Meta or Google properties is not an option unless they dwindle to ghost towns. And people keep predicting Twitter is about to do just that, but is it?

Unlike the typically reclusive tech bosses, Elon Musk is all over Twitter all day long. It’s hard to miss his tweets, many of which purport to defend the speech right, including on behalf of the former president, who attempted to overthrow the constitutional order of the Republic. Whether Musk even contemplates that paradox is unknown, just as it is unclear whether he believes his own bullshit about the speech right or simply thinks the rhetoric will be good for business. When he complains that an advertiser exercising its speech right is anti-speech, is he really that obtuse, or is he using “speech” as a lever, hoping the market will pressure the advertiser to re-invest in Twitter?

On the other hand, if Zeeshan Aleem writing for MSNBC is correct, Musk is actively willing to lose one market in favor of another. On the subject of reinstating Trump’s account following a poll conducted by Twitter, Aleem writes, “In his presentation of his faux referendum as a win for ‘the people,’ Musk appears to be trying on right-wing populism for size. And it’s only the latest sign that he views Twitter as a platform for advancing his political agenda as he develops increasingly pronounced far-right views.”

If Musk is a right-wing populist in the mode of Trump, then his free speech rhetoric is on target—courting a base that has swapped all comprehension of American civics for a politics of fear, victimhood, and conspiracy mongering. It takes a practiced ignorance to kowtow to a putative authoritarian while arguing that he deserves a platform under the principles of the First Amendment; and I would say that one must be Trump-drunk to so thoroughly misunderstand the speech right, except that isn’t true, is it?

Elon Musk’s stewardship of Twitter is the logical extension of tech-utopianism just as Trump was a natural byproduct of it—because the erroneous defense that everything is free speech fosters the populist fallacy that there are always two or more sides to every story. Not always. Not every story. For instance, Twitter will no longer enforce its COVID misinformation policy. So, when the market or a news editor or a platform rejects or ignores speech that is objectively false, grotesquely insane, or merely offensive, the speaker naturally colors himself a victim of censorship or “cancel culture.”

But as the new CEO of Twitter, Musk appears as a golem made from the dust and mud slung by the Electronic Frontier Foundation, Google, Facebook, Fight for the Future, PublicKnowledge, Techdirt, Reddit, Wikimedia Foundation, and every other organization or Big Tech business that preached the gospel that every tittle and jot posted online is fundamentally speech worthy of protection. Yes, Musk is a particular kind of asshole, but the speech nonsense he coughs up today is indistinguishable from anything the tech-utopian/Silicon Valley crowd have been spewing for twenty years.

From the anti-SOPA campaign to the TPP to the incoherent battle over net neutrality to SESTA/FOSTA to the bananas narrative about Section 230 during the Trump administration, the underlying false premise has been the same—that because social platforms are clearly forums for speech, we cannot distinguish, let alone moderate, speech that is harmful or even illegal in this brave new world. But even though that view waned significantly—and deservedly—after 2016, Musk thinks he’s being clever here:

In 2022, that headline is not remotely controversial. The evidence is in and overwhelming. By first allowing every syllable or image to flow freely and then treating it all as protected speech, internet platforms fueled mobs that bullied speakers—very often women with something to say—into silence. Cyber civil rights experts Danielle Citron and Hany Farid wrote earlier this month in Slate:

In 2009, Twitter banned only spam, impersonation, and copyright violations. Then, the lone safety employee, Del Harvey, recruited one of us (Citron) to write a memo about threats, cyberstalking, and harms suffered by people under assault. Harvey wanted to tackle those harms, but the C-suite resisted in the name of being the ‘free speech wing of the free speech party.’

It took many years and multiple shocks to the political system before certain individuals in Big Tech finally admitted that they had helped build insidious machines, while platform operators, with the help of “digital rights” groups, swept every sin under the rug of free speech. Many of the individuals who finally spoke out were whistleblowers and defectors from Facebook, but Jack Dorsey actively sought to change Twitter. Again, Citron and Farid write:

[In 2015], Jack Dorsey returned as CEO and made trust and safety a priority. This was especially evident after the 2016 election. In response to the disinformation and hate speech that plagued the platform during the election season, Dorsey and Gadde gathered a small kitchen cabinet … to map a path forward to ensure that the platform would enhance public discourse rather than destroy it.

It is no longer news that Musk fired the trust and safety folks at the company and has allegedly reversed about a decade’s worth of initiatives designed to make Twitter safer and more accountable. And it is clear from his tweets that he is doubling down on an experiment in laissez-faire speech absolutism that has already failed. In fact, he wrote this spit-take-inducing tweet just a few days ago:

Is he really that naïve? Just a tech bro Ozymandias presiding over a village about to become a wasteland? Or is he an ideologue weaponizing the rhetoric of democracy to soften the ground for another run at authoritarianism? Or maybe he’s just a guy with typically inconsistent views filtered through a billionaire’s ego? Whatever Musk envisions for Twitter—a return to the free-for-all that Dorsey et al. started to clean up, or a competitor to Parler—one thing is for sure: he does not have to lose the whole market in order to lose the whole business.


Hazmat suit photo by: Harbucks