Seuss Nixes Six, Sowing So Many Cli©ks!

In late January, I published a post advocating that we go ahead and cancel some culture. That piece addressed the subject of platform responsibility, asserting that Facebook et al should feel free to stop amplifying disinformation, hate-mongering, and (unfortunately) sedition, and that they should do so without all the dithering about speech rights. There, I asserted that neither Facebook nor anybody else needs to apologize for “cancelling” fascism or, more broadly, any illiberal and violent agenda hellbent on ending democracy.

Still, I am loath to use the term “cancel culture” at all. Like other neologisms, it has been sapped of meaning by grumbling Trumpublicans, who make no distinction between, say, deplatforming a white supremacist and a decision in the creative world where authors and stewards of works amend how they express themselves because it may be offensive to the market.

Can the intent to avoid offense go too far? Yes, in my view, it can. I believe, for instance, that it is illiterate to demand only a sanitized version of Huckleberry Finn, or to apply certain sensitivities so aggressively as to mute authors from expressing honest observations about the human condition. (If a writer creates a misogynist character who never utters a sexist remark, the result is ridiculous.) But such instincts are not the only path to illiteracy. It is also illiterate not to know that certain forms of expression have always been ignorant or hateful—the most obvious of these would be the anthology of Black caricatures in America—and acknowledging this truth in the present is not a “cancellation” of anything. In fact, it’s culturally additive, if you think about it.

Because while there may be pockets of society that would hyperextend the effort to avoid offending anyone (an impossibility), it does not appear that our cultural output comprises the kind of tedious homogeneity one would expect as a result. On the contrary, cultural works are more diverse and complex than ever; and perhaps it is this fact alone that certain “conservatives” find so offensive. If that’s the case, I would point to their tattered and neglected hymnals and suggest they sing a few verses of the Free Market Is Doing Its Job.

But why this sermon? Because the latest bit of news that has a certain brand of conservative frothing in the media was the announcement by Dr. Seuss Enterprises (DSE) that it will discontinue publication of six titles. These are And to Think That I Saw It on Mulberry Street, If I Ran the Zoo, McElligot’s Pool, On Beyond Zebra!, Scrambled Eggs Super!, and The Cat’s Quizzer.

The brief statement by the company declares without equivocation, “These books portray people in ways that are hurtful and wrong.” And in response, various pundits lashed out, blaming “post-modernist, woke, liberals” for wanting to erase or scrub the life out of all past works. And as much as I am willing to roll a jaundiced eye at excessive wokeness, that is only a fragment of the scorn I feel for all the hyperventilating reactions to DSE’s decision—especially the copyright nonsense it set in motion.

Copyright law was dragged into the conversation because, of course, it is copyright that enables DSE to cease the production of new copies of these titles. To be clear, however, it is first and foremost the speech right that safeguards us against coerced speech. Any author/rightsholder may choose to stop making a work available because it has become anachronistic, offensive to the market, unprofitable, or simply because the author has changed his damn mind about what the work says. The right to stop speaking is inherent to the First Amendment, and with published works, that right is enforced through copyright law.

Consequently, in response to DSE’s choice to discontinue these titles, some critics on both the left and right began noising that copyright law should be amended to prevent this sort of thing, although the motives for the prevention are obviously disparate. Culture editor Sonny Bunch, writing for the Washington Post, proposed that if an author/owner no longer wishes to profit from a work associated with offensive content, the work should fall into the public domain. But, as any author or copyright advocate can tell Mr. Bunch, merely divesting from the work financially does not dissociate the brand/author from the expression at issue.

But Can Everyone Please Get a Grip?

What I would say to nearly all parties reacting to this story is to please chill the hell out. Put the half-baked copyright theories back in the drawers and, by all means, stop whinging just because a franchise decides that some of its products are no longer appropriate for the children’s book market. Cultural works come and go. And nothing about the great “celestial jukebox” we call the internet has proven otherwise. On the contrary, one can argue that the short-attention-span reality fostered by social media has erased volumes of cultural literacy across all living generations. In fact, I have made that argument.

There’s a reason why illustrations of Pickaninnies and Sambos are found in museums and archives, but not on buses and billboards. Yes, these images are an unflattering part of the American story, and for that reason alone, they should not be erased from memory. But these images are rightly not part of contemporary culture because they are offensive and ignorant and anathema to peace and prosperity. Works come and go. And that’s fine.

Ever read The Castaway? Me neither. It was a controversial (i.e. presumably racist) novel about the Civil War published in 1904, and it happened to be the subject of the lawsuit that gave us the first sale doctrine in copyright law. First sale is what allows you to sell or dispose of your copy of a work however you choose. And guess what? DSE’s right to stop making new copies of And to Think That I Saw It on Mulberry Street (1937) does nothing to prevent what may be a few million copies from existing as artifacts for collectors and, eventually perhaps, for archives and museums. Meanwhile, copies of the discontinued six are already selling for a small fortune on eBay and elsewhere. Thank you, first sale doctrine.

If your personal view is that nothing in the Seuss books is nearly so offensive as the Black caricatures I mentioned above, I would be inclined to agree, but that is entirely beside the point. Offense is in the eye of the beholder. And both the speech right and copyright law grant that judgment call to the rightsholder of the work. As a matter of business, DSE has every right to discontinue products it deems bad for the brand and to protect the market for the rest of the franchise. How anyone calling himself a Republican could argue with that is a mystery. But we live in strange and preposterous times.

Meanwhile, copyright law does not need amending to address a problem that does not exist. Authors and their assigns have the right to express themselves and decide whom they are willing to risk offending. And the market has a right to respond. Doubtless, there are hardline conservatives who consider The Lorax a work of liberal, tree-hugging indoctrination. And those people are free to shun the book or even write a parody extolling the economic value of Thneed production.* But otherwise, I really think everyone should chill the hell out.  


*I do not subscribe to this view; I still agree there is no need for Thneeds.

See also: Is It Fair Use to Reproduce Out-of-Print Seuss? by Aaron Moss

Yes, Let’s Cancel Some “Culture”

In January 2017, after far-right extremist Richard Spencer was attacked on Inauguration Day, a semi-rhetorical question began trending on social media. Is it okay to punch the Nazi? While I would tend to say that it is rarely ethical to throw the first punch at anyone, can we at least agree that it is not only fair, but morally imperative, to tell the Nazi to fuck off?

It strikes me that there are two conversations occurring on the subject of “cancel culture,” though it should really be one declaration and one conversation. The declaration should be directed at those Americans, whether they are ordinary citizens or Members of Congress, who have decided that “conservative” is now synonymous with religious or ethnic nationalism, or just plain crazy-ass nonsense like QAnon. And the declaration is simple enough: No, you are not owed a conversation, a seat at the table, a platform, or even basic courtesy because your views are well-known predicates to fascism. Take it from Bosnian-American author Aleksandar Hemon, writing in 2018 about why he laments the deference he once showed to his best friend, as he watched that friend become consumed by nationalism until he ultimately rationalized genocide:

My relationship with the war has always been marked by an intense sense that I failed to see what was coming, even though everything I needed to know was there, before my very eyes. While Zoka took active part in enacting the ideas I’d argued against, my agency did not go beyond putting light pressure on his fascist views by way of screaming. I have felt guilty, in other words, for doing little, for extending my dialogue with him (and a few other Serb nationalist friends) for far too long, even while his positions—all of them easy to trace back to base Serbian propaganda—were being actualized in a criminal and bloody operation. 


The lessons of history are clear. It is not only permissible to shut down fascist propaganda; it is essential. Trumpism, with its overt appeals to white nationalism and rank thuggery, is an existential threat to the nation, no matter what happens next to Donald Trump himself. And the immediacy of that threat has helped write the latest chapter in the conversation about the internet and its capacity to radicalize people to the point of engaging in domestic terrorism. Because now that the immediate danger has passed, and the Facebook Oversight Board gathers to decide whether Trump gets back on that platform, the “digital rights” organizations appear to be rehashing false dichotomies when addressing the challenge at hand.

For instance, the EFF, similar organizations, and Facebook’s Oversight Board all seem to acknowledge that deplatforming Donald Trump was a critically necessary response to the insurrection of January 6. But since Biden’s peaceful inauguration, they have reprised the broad, frankly rhetorical, question that asks, Do we want Facebook and Twitter to wield so much power and to be the arbiters of truth? No, we do not want that. But it doesn’t matter because that’s the wrong question. Facebook, Twitter, Google et al are not the arbiters of truth—especially not with regard to countless examples in which truth is anything but arbitrary.

There were not two sides when the former president advocated the medical advice of a witch doctor. There are not two sides to the allegations of consequential fraud in the 2020 election. And there are not two sides to the belief that a conspiracy of pedophile cannibals is running the world. The list of examples, sadly, goes on for miles; but the point is that in many instances of consequence, the social sites do not need to be arbiters of truth. Site managers can use the same resources—experts, professional journalists, courts, and common sense—that the rest of us use to know what is true, and which lies (e.g. all of the above) can be very dangerous.

Why Can’t AI Assist Ordinary Reasoning?

What we should want the major social sites to do is not judge truth, but rather to employ their considerable computing power to identify when momentum is building around narratives that have the capacity to foster acts of tremendous harm. And, by the way, making that determination is not necessarily the job of a bunch of computer programmers or sage academics, and perhaps we should simply get comfortable with Facebook et al notifying the FBI. That said, what does the tipping point look like to site managers? What clues would alert them to the possibility that a page may be transitioning from a forum for political opinions (even rancorous opinions) into a petri dish growing new domestic terrorists? The answer is not uncharted territory: it begins with that word narrative.

When this blog launched, I did a podcast interview with Christopher Dickey, who passed away in July 2020 after a long career as an international journalist, author, and expert on terrorism and extremism. In a subsequent post, I cited Dickey’s observation that there are three ingredients found in most acts of terrorism—Testosterone, Narrative, and Theater—TNT. Narrative he defined broadly as a “belief that one is righting some great wrong.” And I would argue that the animating word in that definition is belief. Righting wrongs can be a virtue, though not usually by violence, and never in cases when the alleged wrong does not exist—like an election that was not stolen or pedophile cannibals who are not running the government.

So, can social media managers, with the help of their all-knowing AI, determine when a false narrative (e.g. on a group page) is metastasizing into a movement, and then assess whether that movement is approaching a threshold toward dangerous action? Conversely, if the answer to that question is yes, can the social media managers also determine when chatter is relatively benign, even if it may be generally divorced from reality? Probably. Because metrics exist.

If Facebook, Google et al can influence a market decision, it seems highly likely that they can identify extremist tipping points because certain criteria (like Dickey’s TNT) will likely be present every time. For instance, I would propose virality, latent toxicity, and kinetic toxicity as three starting metrics. The first, virality, is something these companies measure all day long, and assessing relative significance is not a difficult logical leap. For example, if fifty people opine in a handful of threads that vaccines cause autism, that is not nearly so significant a measure of virality as five million people repeating this nonsense across multiple pages.

The second metric assesses the latent toxicity of a viral narrative, which is not simply a matter of volume. Five million adults who believe that vaccines cause autism represent high toxicity, whereas thirty million adults who believe in ghosts represent low toxicity. But this assessment is also influenced by the third metric, which assesses kinetic toxicity. If the action taken by the five million antivaxxers is to shun vaccines and, thereby, force society to risk the return of polio, that action has very high toxicity. On the other hand, if half of the thirty million ghost believers want to go specter hunting on their next vacations, that action has very low toxicity.

But, as we see happen all the time, if a splinter group of, say, 5,000 ghost enthusiasts coalesces around a new narrative, perhaps originating on 8Chan, that evil poltergeists are running America’s public transportation systems, this subgroup has just increased its latent toxicity relative to the original narrative. At this point, the social media managers have reason to comb the splinter group’s page for kinetic toxicity, assessing whether the group is beginning to advocate, for instance, an assault on city buses and subways in order to purge the evil spirits from the system.
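To show that the three-metric idea is mechanical once the signals exist, here is a toy sketch in Python. Everything in it is invented for illustration: the field names, the log-scaled virality, the equal weighting, and the triage thresholds are my own placeholders, not anything drawn from a real platform’s systems.

```python
import math
from dataclasses import dataclass

@dataclass
class NarrativeSignals:
    """Signals a platform might already collect for a group or page.
    All fields are hypothetical, named after the three proposed metrics."""
    unique_posters: int       # distinct accounts repeating the narrative (virality)
    latent_toxicity: float    # 0.0-1.0: how harmful the belief is if acted upon
    kinetic_toxicity: float   # 0.0-1.0: how concretely the group advocates action

def risk_score(s: NarrativeSignals) -> float:
    """Combine the three metrics into a single 0-1 score.
    Virality is log-scaled: millions of posters count far more than fifty,
    but not a million times more."""
    virality = min(math.log10(max(s.unique_posters, 1)) / 6.0, 1.0)  # saturates at 1M posters
    return virality * (0.5 * s.latent_toxicity + 0.5 * s.kinetic_toxicity)

def triage(s: NarrativeSignals) -> str:
    """Map a score to an action; the cutoffs are arbitrary placeholders."""
    score = risk_score(s)
    if score >= 0.5:
        return "escalate"   # e.g., human review or referral to authorities
    if score >= 0.2:
        return "monitor"
    return "ignore"

# Thirty million ghost believers planning specter-hunting vacations: viral but benign.
ghosts = NarrativeSignals(unique_posters=30_000_000, latent_toxicity=0.1, kinetic_toxicity=0.05)

# A 5,000-member splinter group advocating attacks on transit: small but dangerous.
splinter = NarrativeSignals(unique_posters=5_000, latent_toxicity=0.9, kinetic_toxicity=0.8)

print(triage(ghosts))    # ignore
print(triage(splinter))  # escalate
```

The sketch only demonstrates that the assessment is arithmetic once the inputs are measured; the hard part, as argued below, is whether the companies choose to act on the output.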

Nothing I just hypothesized is one bit loonier than the multiple narratives that collided at the Capitol on January 6. And none of the metrics I propose (name them or amend them however you like) is beyond the capacity of Facebook, Twitter, et al to measure and assess. The question is not whether taking such an approach is a civil liberties issue; these companies use these kinds of data all day long for their own pecuniary interests. The question is whether these companies have the moral integrity to risk losing market share by removing (or reporting) extremism, even when that extremism emanates from the highest levels of government.

Of course, it is beyond even the hubris of Zuckerberg to tackle America’s existential crisis of the moment, when it is clear that tens of millions of our citizens either do not know or do not care that the former president and members of their party committed sedition. Facebook and friends cannot solve that, but they can help mitigate galloping disinformation and nascent fascism. And they should look to their analog forebears for guidance. Returning to that same article by Aleksandar Hemon: he responds to a moment when The New Yorker’s editors invited Steve Bannon to a discussion and then rescinded the invitation, a reversal that various parties called censorship. Hemon’s insight is relevant to the social platforms, if they choose to listen:

The error in Bannon’s headlining The New Yorker Festival would not have been in giving him a platform to spew his hateful rhetoric, for he was as likely to convert anyone as he himself was to be shown the light in conversation with Remnick. The catastrophic error would’ve been in allowing him to divorce his ideas from the fascist practices in which they’re actualized with brutality. If he is at all relevant, it is not as a thinker, but as a (former) executive who has worked to build the Trumpist edifice of power that cages children and is dismantling mechanisms of democracy.

Divorcing ideas from practice may be one of the most accurate expressions ever written to describe the fallacy underlying nearly all platform governance, or lack of governance, to date. And the folly needs to end now that we have seen some of the worst evidence imaginable that online madness, like QAnon, is not merely inert speech. The United States is a very delicate idea. And we have no reason to equivocate when rejecting ideas—least of all wild conspiracy theories or old ideas grounded in doctrines of cruelty—that are fatally incompatible with the nation’s existence. Fascism is the consequence of all forms of fundamentalism, and genocide is the aim of all forms of fascism.  So, yes, we must cancel that before it cancels us all. To that end, certain voices do not deserve a platform. And no apology is owed for telling them to fuck off.

Photo by: mikdam