Bittertweet Symphony

One of my first mantras when I started this blog was “I hate Twitter,” but that was shorthand for the broader view that social media is a trainwreck. Of course, the existential difficulty presented by these platforms is that while they can be highly toxic, as long as the market remains, one must have a presence if one has a business or anything else to promote. Leaving Twitter, or the Meta or Google properties, is not an option unless they dwindle into ghost towns. And people keep predicting Twitter is about to do just that, but is it?

Unlike the typically reclusive tech bosses, Elon Musk is all over Twitter all day long. It’s hard to miss his tweets, many of which purport to defend the speech right, including on behalf of the former president, who attempted to overthrow the constitutional order of the Republic. Whether Musk even contemplates that paradox is unknown, just as it is unclear whether he believes his own bullshit about the speech right or simply thinks the rhetoric will be good for business. When he complains that an advertiser exercising its speech right is anti-speech, is he really that obtuse, or is he using “speech” as a lever, hoping the market will pressure the advertiser to re-invest in Twitter?

On the other hand, if Zeeshan Aleem writing for MSNBC is correct, Musk is actively willing to lose one market in favor of another. On the subject of reinstating Trump’s account following a poll conducted by Twitter, Aleem writes, “In his presentation of his faux referendum as a win for ‘the people,’ Musk appears to be trying on right-wing populism for size. And it’s only the latest sign that he views Twitter as a platform for advancing his political agenda as he develops increasingly pronounced far-right views.”

If Musk is a right-wing populist in the mode of Trump, then his free speech rhetoric is on target—courting a base that has swapped all comprehension of American civics for a politics of fear, victimhood, and conspiracy mongering. It takes a practiced ignorance to kowtow to a putative authoritarian while arguing that he deserves a platform under the principles of the First Amendment; and I would say that one must be Trump-drunk to so thoroughly misunderstand the speech right, except that isn’t true, is it?

Elon Musk’s stewardship of Twitter is the logical extension of tech-utopianism just as Trump was a natural byproduct of it—because the erroneous defense that everything is free speech fosters that populist fallacy which alleges there are always two or more sides to every story. Not always. Not every story. For instance, Twitter will no longer enforce its COVID misinformation policy. So, when the market or a news editor or a platform rejects or ignores speech that is objectively false, grotesquely insane, or merely offensive, the speaker naturally colors himself a victim of censorship or “cancel culture.”

But as the new CEO of Twitter, Musk appears as a golem made from the dust and mud slung by the Electronic Frontier Foundation, Google, Facebook, Fight for the Future, PublicKnowledge, Techdirt, Reddit, Wikimedia Foundation, and every other organization or Big Tech business that preached the gospel that every tittle and jot posted online is fundamentally speech worthy of protection. Yes, Musk is a particular kind of asshole, but the speech nonsense he coughs up today is indistinguishable from anything the tech-utopian/Silicon Valley crowd have been spewing for twenty years.

From the anti-SOPA campaign to the TPP to the incoherent battle over net neutrality to SESTA/FOSTA to the bananas narrative about Section 230 during the Trump administration, the underlying false premise has been the same—that because social platforms are clearly forums for speech, we cannot distinguish, let alone moderate, speech that is harmful or even illegal in this brave new world. But even though that view waned significantly—and deservedly—after 2016, Musk thinks he’s being clever here:

In 2022, that headline is not remotely controversial. The evidence is in and overwhelming. By first allowing every syllable or image to flow freely and then treating it all as protected speech, internet platforms fueled mobs that bullied speakers—very often women with something to say—into silence. Cyber civil rights experts Danielle Citron and Hany Farid wrote earlier this month in Slate:

In 2009, Twitter banned only spam, impersonation, and copyright violations. Then, the lone safety employee, Del Harvey, recruited one of us (Citron) to write a memo about threats, cyberstalking, and harms suffered by people under assault. Harvey wanted to tackle those harms, but the C-suite resisted in the name of being the ‘free speech wing of the free speech party.’

It took many years and multiple shocks to the political system before certain individuals in Big Tech finally admitted that they had helped build insidious machines while platform operators with the help of “digital rights” groups swept every sin under the rug of free speech. Many of the individuals who finally spoke out were whistleblowers and defectors from Facebook, but Jack Dorsey actively sought to change Twitter. Again, Citron and Farid write:

[In 2015], Jack Dorsey returned as CEO and made trust and safety a priority. This was especially evident after the 2016 election. In response to the disinformation and hate speech that plagued the platform during the election season, Dorsey and Gadde gathered a small kitchen cabinet … to map a path forward to ensure that the platform would enhance public discourse rather than destroy it.

It is no longer news that Musk fired the trust and safety folks at the company and has allegedly reversed about a decade’s worth of initiatives designed to make Twitter safer and more accountable. And it is clear from his tweets that he is doubling down on an experiment in laissez-faire speech absolutism that has already failed. In fact, he wrote this spit-take-inducing tweet just a few days ago:

Is he really that naïve? Just a tech bro Ozymandias presiding over a village about to become a wasteland? Or is he an ideologue weaponizing the rhetoric of democracy to soften the ground for another run at authoritarianism? Or maybe he’s just a guy with typically inconsistent views filtered through a billionaire’s ego? Whatever Musk envisions for Twitter—a return to the free-for-all that Dorsey et al started to clean up, or a competitor to Parler—for sure he does not have to lose the whole market in order to lose the whole business.


Hazmat suit photo by: Harbucks

Might As Well Be Bots

So, I don’t engage very often via Twitter, but once in a while, I respond to something that catches my attention and then usually regret spending time responding to the responses.  Last week, I noticed that Pirate Party MEP Julia Reda—the face, voice, and tweetdeck of anti-Article 13 activism in the EU—posted an odd tweet, and I replied … 

Because, of course, even if Tumblr’s efforts to use AI to identify pornography are a) accurately reported; and b) laughably ineffective, it is misleading for Reda to suggest that this folly is particularly instructive to the purpose or eventual function of Article 13.  What she means to imply, of course, is that any comparable technology, which may be used to identify content that allegedly infringes copyright on a large platform like YouTube, will result in the same kind of errors that are reportedly happening on Tumblr.  

I consider this tweet to be scare-mongering for many reasons, but here are three simple ones:  1) the technologies currently in use for identifying copyrighted material are already better than whatever is being described in the Tumblr/porn example; 2) it is needlessly defeatist to say that these technologies can never be improved and/or supported by human oversight to alleviate error; and 3) if any copyright ID system is too aggressive and error-prone, the rights holders advocating Article 13 aren’t going to like it either.  All of which leads me to conclude that calming down is more rational than, once again, declaring that the internet needs to be “saved.”
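The second point, that automated matching can be paired with human oversight rather than acting alone, can be sketched in a toy example. Everything below is hypothetical: the fingerprint values, the thresholds, and the three-tier triage are invented for illustration and do not describe any real content-ID system.

```python
# Illustrative sketch only: a toy content-ID flow in which an automated
# matcher escalates ambiguous cases to a human reviewer instead of
# deciding everything itself. All values here are hypothetical.

def hamming_distance(a: int, b: int) -> int:
    """Count the bits that differ between two integer fingerprints."""
    return bin(a ^ b).count("1")

# Hypothetical fingerprints of works registered by rights holders.
REFERENCE_DB = {
    0x0F0F: "Registered Work A",
    0x3C3C: "Registered Work B",
}

BLOCK_THRESHOLD = 4    # near-identical match: act automatically
REVIEW_THRESHOLD = 12  # ambiguous match: route to a human reviewer

def triage(upload_fingerprint: int) -> str:
    """Return 'block', 'human_review', or 'allow' for an upload."""
    best = min(hamming_distance(upload_fingerprint, ref) for ref in REFERENCE_DB)
    if best <= BLOCK_THRESHOLD:
        return "block"
    if best <= REVIEW_THRESHOLD:
        return "human_review"
    return "allow"
```

In this sketch, an exact match such as `triage(0x0F0F)` returns `"block"`, while a borderline fingerprint lands in `"human_review"`; the point is simply that the error cases the tweet worries about can be designed to reach a person rather than an automatic takedown.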

So, Reda and I had a brief exchange in the service of nothing (i.e. the reason I don’t like the forum), and went our merry ways.  But I did notice that at least one of the “users” who liked and retweeted one of Reda’s responses to me looked an awful lot like a bot.  The account was a handle and an avatar, it had just a few followers, and its tweetdeck was almost exclusively about the evils of Article 13.  So, while scrolling that thread and wondering whether the account might be a bot programmed to RT anti-Article 13 stuff, I saw this tweet posted by a different account with zero followers…

And this prompted a new thought.  What difference does it make if a tweet like the above is posted by a bot or a real person?  Because if social media platforms like Twitter train real people to respond with Pavlovian certainty to any given issue, they might as well be bots.  Either this individual simply doesn’t know that the platforms he says will be “destroyed” are the biggest of big corporations; or he is so well trained to respond to certain signals that he’ll just remain blissfully unaware of his own cognitive dissonance.  Or he’s a bot.

Either way, same result.  Some other bot, or mindless person, or ten-year-old child repeats the unfounded assertion that, for instance, the legislative language is “vague,” and boom—it’s now a fact.  Why would anyone take at face value some anonymous tweet claiming that a body of legislative language—in any area of law—is vague?  Because they want to believe it, and the desire to perpetuate that narrative is sustained by knowing diddly squat about the legislative language itself.

When the hyperventilating saga that was the anti-SOPA campaign peaked in early 2012, the internet giants still enjoyed a general benefit of the doubt that they had built platforms that were truly making global democracy work better.  (And that they had built these platforms out of the goodness of their hearts!)  So, all those anti-SOPA headlines warning people not to let anyone “break the internet” were understandably hard to counter with any kind of cool reason.

Today, though, it is curious—if not a little bit frightening—that even after the fallout from stories like Cambridge Analytica, the “Save Your Internet” battle cry is still effective in the current opposition to Article 13.  It is, after all, a reprise of the same digital dirge that was so effective seven years ago; but surely, this general call to arms cannot jibe with what we’ve learned over the past two years about major platforms and a number of paradigms not worth saving.

In 2011, I wondered how many non-constituents were marshaled to stop American legislation (SOPA/PIPA) in its tracks.  How many foreign citizens? How many children?  How many bots?  Because, as David Lowery has detailed in a multi-part post, the methods employed by Big Tech to sway public policy may be one of a handful of legitimate threats to democracies around the world.  And on this topic as to who—or what—is being rallied to action, TorrentFreak published a new post that misses, or purposely obfuscates, a very important distinction.  Andy calls hypocrisy on the IFPI for criticizing Julia Reda’s shout-out to children in this tweet:  

Andy compares Reda’s appeal to children to the fact that major copyright interests have often launched initiatives to educate kids about copyright and piracy.  “…it’s pretty ironic that IFPI has called out Reda for informing kids about copyright law to further the aims of ‘big tech companies’. As we all know, the music and movie industries have been happily doing exactly the same to further their own aims for at least ten years and probably more,” he writes.   

But the differences between Reda’s targeting kids on social media and the kind of initiatives Andy refers to are substantial and significant.  At a very basic level, educating children about how copyright works may be offensive to the pirates out there, but copyright has been part of the legal fabric of Europe and the U.S. for a couple of centuries, so it’s not exactly propagandist to explain its function in age-appropriate ways to groups of schoolchildren.  And given the fact that plagiarism can end someone’s college or university career, the foundations of copyright are in no way anathema to general education.  Outreach to schools on the subject of piracy and copyright tends to include the following lessons or discussions:

  1. education about the skilled people who make the movies, music, etc. people enjoy.
  2. a message that taking things without paying for them is both illegal and wrong.
  3. a message that respecting creators falls under the principle of the golden rule.

While these themes may be antithetical to pirate rationalizations for mass infringement, they’re not exactly outliers to the fundamentals that most people try to teach their kids. (Substitute farmer for creator, and the discussion will be very similar.)  In contrast to the implications of Reda’s tweet, major rights holders don’t generally engage children via Twitter to take direct action aimed at promoting or stopping specific legislation.

Nobody can doubt that when organizations like movie studios fund education programs in response to piracy, the effort is industry-serving, but those types of broad initiatives do not compare to an elected official addressing teens and tweens on social media and telling them (untruthfully) that YouTube won’t work anymore because of a policy those kids are not going to understand. And that fairly well sums up what I think about all of this—that the so-called defenders of “the internet,” who appeal to democratic principles in that effort, consistently demonstrate exactly why “the internet” isn’t worth defending.


Photo source by davincidig

Platform Responsibility? How about starting with legal content?

It may be hip these days to talk about platform responsibility, but just a couple years ago, there were no mainstream conversations about how the operations and policies of online service providers might be enabling misinformation, hate speech, propaganda, etc. And while mea culpas from Facebook’s Mark Zuckerberg and Twitter’s Jack Dorsey make headlines, and Google tries to pitch the general message that “we’re all in this together,” my more cynical self wonders whether these service providers are just waiting out the news cycle. Waiting until we grow weary of this new discussion, which just happens to be focused on some of the most difficult (if not intractable) questions, like where to draw lines on protected speech.

As alluded to in this post, it is my personal theory that if the major service providers do not change their policies, practices, and rhetoric with regard to illegal content—or support of illegal content—then all this chatter about finding balance in the realm of protected speech is just pandering noise that will soon die down. I do not doubt that Zuckerberg, Dorsey, et al feel personally conflicted about the role their platforms have played in elevating rank divisiveness into the mainstream of political discourse; but when these guys, and other representatives of OSPs say things like “We have to do better,” I can’t help but think of the litany of cases in which internet companies have fought against complying with established legal principles at every turn.

I think of Google fighting a Canadian Supreme Court order in Equustek v. Google to delist links to a counterfeit product supplier. Or Yelp in Hassell v. Bird refusing to remove a review that a court held to be libelous. Or the fact pattern in BMG v. Cox Communications which revealed a systemic policy whereby the OSP avoided compliance with the terms of the DMCA. Or even Viacom v. YouTube, which, though settled without trial, revealed a similar fact pattern of knowingly enabling users to infringe copyrights. Or one of my favorite moments in internet hubris: Reddit’s hand-wringing, apologetic rationale for removing a subreddit that was hosting stolen nude photos of celebrities, who happened to be victims of a hacker.

Not one of the cases alluded to above involves protected speech, yet the responses have all been variations on the same theme: that removing anything from the web can only be a slippery slope toward “censorship.” And despite the fact that these, and other examples, generally entail unprotected, illegal content, we are now suddenly expecting the OSPs to grapple with the more complicated matter of monitoring legal speech and to do…something…as a matter of principle. Don’t get me wrong. A change in attitude would be welcome in so many ways. But if the major platforms cannot first amend their practices with regard to illegal material, I am highly doubtful they will come anywhere near striking the balance that everyone who is now having the “responsibility” conversation says is so essential.

In a panel discussion about platform responsibility hosted yesterday by the Technology Policy Institute, Daphne Keller of the Center for Internet and Society said that she “did not want to return to the copyright wars” in the context of the discussion now being had. That’s her prerogative, of course, but copyright infringement is probably the vanguard issue that is most instructive to this moment of internal and external consideration of what platform responsibility actually means. Two decades worth of policies adopted by the major OSPs to first profit from copyright infringement and then seek to reshape copyright law itself in the courts, in academia, and in the public sphere reveal the sense of “responsibility” these companies have felt toward the people they have been exploiting. And of course when the exploited complained they were told they were wrong—that they did not understand the future.

In fact, in yesterday’s panel, I believe it was Keller who alluded to the “false dichotomy” that pits technology against rightholders, but let us not forget the origin of that bullshit narrative. Because it didn’t come from the rightholders. Shall we do a search for all the editorials posted by Techdirt, by EFF, by Lessig and Lefsetz—by copyright critics large and small—who have labeled creative rightholders as technology Luddites “clinging to old models”? That’s not the copyright owner’s narrative, it’s Big Tech’s narrative. So, if there is a false dichotomy, which now demands clarification, it ought to be recanted by the liars who wrote it and are still repeating it. That would be taking responsibility.

Interestingly enough, as a former Associate General Counsel for Google, Keller worked on the aforementioned Equustek case, and in June of 2017, she wrote a blog post for CIS in which she labeled the Canadian Court order that Google remove search results globally as an “ominous” proposal. In simple terms, this was a case in which a counterfeit business infringed Equustek’s trade secrets and then sold knock-off products via multiple sites on the web. Equustek sought and won a court order to remove the counterfeiter’s sites globally from Google’s search results.

I cite this example because it is comparatively straightforward. The legit company deserves the business earned by its products; consumers deserve to know what they’re buying and from whom; and there is no speech protection for trade in counterfeit goods. Equustek is also instructive because there is a clear parallel between its prayer for injunctive relief and, say, the motion picture industry’s efforts to have Google delist or demote major pirate sites, which are also not protected speech. Yet, in her 2017 post, Keller sums up the “ominous” nature of the Canadian Court order thus:

“Canada’s endorsement of cross-border content removal orders is deeply troubling. It speeds the day when we will see the same kinds of orders from countries with problematic human rights records and oppressive speech laws. And it increases any individual speaker’s vulnerability to laws and state actors elsewhere in the world. Content hosting and distribution are increasingly centralized in the hands of a few multinational companies – Google, Facebook, Apple, Amazon and Microsoft with their web hosting services, etc. Those companies have local presence and vulnerability to formal jurisdiction and real world threats of arrest or asset seizure in scores of countries.”

Apropos of that first sentence, Keller asks rhetorically in the same post, “Can Russia use its anti-gay laws to make search results unavailable to Canadians?” I have two responses to this: the first is No, because the hypothetical Russian court order would violate both Canadian and American law, which is not the case in Canada’s order to Google in Equustek. Keller, who is really citing Canada’s Michael Geist, falsely alleges that the defendant in Equustek is disseminating protected “speech and information,” which is not the case because the content is infringing and misleading in a manner that could be construed as fraudulent.

My second response is to mention that the policy view Keller seems to advocate—that the rule of law just doesn’t work in cyberspace—is exactly how we arrived at the moment in history when the Russian government is in fact exporting its agenda to the U.S. by using our own speech rights against us on social media. The Geist/Keller example of the Russian court order is pure hypothetical hysteria, but the phenomenon in which paid Russian hackers are fomenting anti-gay, and other hateful sentiments, to ratchet up divisiveness in the U.S. is a verified reality. I happen to think this makes pretty compelling evidence that the rule of lawlessness in cyberspace hasn’t worked out so well, but perhaps that’s just my inner Luddite talking.

So, although the topic of platform responsibility may be trending right now, I maintain some doubt that the OSPs can, or even should, try to protect society against the social and political effects of problematic information. That topic may be what sparked the conversation, but the complexity of that challenge, as it is currently framed, may wind up allowing the service providers to revert to the status quo, in which they moderate almost nothing and monetize almost everything.

Instead, taking on the less-challenging task of actually mitigating illegal content—copyright infringement, harassment, counterfeiting, trafficking, libel, etc.—does not require platform administrators to wade into the murky complexities of moderating speech. So, if they really mean it when they say, “We have to do better,” they can certainly start by complying with reasonable court orders and working with—rather than against—key stakeholders seeking a more lawful internet ecosystem.


Photo by David Crockett