Mach (digital) Tuck?

Not long after I wrote a post suggesting there is little difference between naive human engagement and bot engagement on policy issues, a couple of things happened.  One was the publication of a story by Max Read in New York Magazine reporting that a substantial (though hardly surprising) share of the material and people on the internet is fake.  The other was a recent get-together with my best friend of 30 years, who told me that his eighth-grade son said he was worried that “Article 13 is going to destroy YouTube.”  Perhaps most tellingly, the kid complained to his father, “The [legislative] language is too vague!”

Now, I love my friend’s kids as I love my own, and I hope they and their contemporaries become engaged citizens as they grow up; but none of them is yet qualified to have an informed opinion about the precision, or lack thereof, in the language of bills proposed at the European Parliament or any other legislative body.  The broad irony here is hard to miss:  In contemporary, digitally-distracted America, we can only hope the next generation learns about the constitutional separation of powers and other rudimentary civics, while YouTube videos frighten them about “hazards” lurking in arcane proposals they cannot possibly have the background to understand.

This is no way to run a liberal democracy, yet my friend’s son in this scenario, through no fault of his own, has been turned into a bot.  Fortunately, his father doesn’t generally take the policy views of his children at face value, but you can bet that more than a few parents are apt to be less attentive when it comes to some obscure (let alone European) bit of cyber-policy.  Their kids announce, “Hey, XYZ will destroy the internet,” and the next day, a meme with the same message just happens to appear on Facebook, Mom or Dad shares it, and civilization is destroyed one guileless click at a time.

Of course, it isn’t one click at a time.  It’s millions of clicks that, apropos the aforementioned New York article, do not even represent real people in many cases.  In fact, Max Read states, “Studies generally suggest that, year after year, less than 60 percent of web traffic is human; some years, according to some researchers, a healthy majority of it is bot.”  

If that statistic is off by even a wide margin, that is still a hell of a lot of bot, which leads me to the following question:  If we combine total bot + humans acting like bots + hackers & other disinformation brokers, how close are we to approaching maximum inescapable bullshit?  We might also add just plain bad reporting under brand names like this piece on forbes.com about Article 13, which refers to the non-existent “FAIR USE Act” — an error gross enough to disqualify the author from commenting on copyright law at all.  But here we are.

It was the same best friend mentioned above who, when we were college freshmen, taught me the aeronautical term mach tuck.  Put simply, this is a hazardous condition that occurs, usually in a subsonic aircraft traveling faster than it is supposed to, when the airflow over the wing nears, or even exceeds, Mach 1.  The shock wave that forms shifts the wing’s center of lift rearward, causing the nose of the plane to pitch downward, and in an aircraft not designed to fly at or above the speed of sound, correction may not be possible.  The downward pitch naturally causes the plane to accelerate, which exacerbates the problem by increasing the airspeed over the top of the wing.  This vicious cycle called mach tuck can plunge the aircraft into an unrecoverable dive.
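For readers who like to see a feedback loop made concrete, here is a toy numerical sketch of the runaway cycle described above.  To be clear, this is a cartoon, not real aerodynamics: the critical Mach number, the coefficients, and the update rule are all invented purely for illustration.

```python
# Toy illustration of the mach tuck feedback loop (NOT real aerodynamics):
# once airflow over the wing passes a hypothetical critical Mach number,
# the nose pitches down, the dive adds speed, and the pitch-down grows.

CRITICAL_MACH = 0.85  # invented threshold where shock-induced tuck begins


def simulate_tuck(mach, pitch_deg, steps=10):
    """Iterate a cartoon feedback: excess Mach -> nose drops -> more speed."""
    history = []
    for _ in range(steps):
        excess = max(0.0, mach - CRITICAL_MACH)
        pitch_deg -= 20.0 * excess            # shock shifts lift aft: nose drops
        mach += 0.01 * max(0.0, -pitch_deg)   # nose-down attitude adds speed
        history.append((round(mach, 3), round(pitch_deg, 1)))
    return history


run = simulate_tuck(mach=0.87, pitch_deg=0.0)
# Each step, the Mach number climbs and the pitch grows more negative:
# the vicious cycle feeds itself.
```

The point of the sketch is only that each pass through the loop makes the next pass worse, which is the structure of the metaphor that follows.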

Obviously, I took that aerodynamic detour to ask whether the “information age” has achieved—or could soon achieve—mach tuck.  Is the flow of garbage streaming so quickly, and accelerated by forces beyond our control, that we are about to enter—or are already in—an information nose dive from which we cannot recover?

Optimistically, I don’t think so.  While it is not entirely clear that the pros of digital engagement sufficiently outweigh the cons, the pros are still evident and abundant.  No one can, or should, doubt that there is a wealth of useful, credentialed, instantly-accessible information online.  The challenge, though, is that the major social platforms—which happen to be owned by the wealthiest and most politically-influential corporations—distort information through a mosaic of visual stimuli that probably overpower critical thinking.  Assuming that’s true, things may be about to get worse before they get better.

I suspect 2019 will be the year a lot more people hear about “deepfakes,” face-swapping software (built with Google’s open-source machine-learning tools) that enables even a modestly-skilled individual to effectively “skin” any face onto any body in a video clip.  Both “deepfakes” and the more elaborate CGI capabilities of motion picture suppliers have raised new questions about publicity rights and celebrities’ ability to control the use of their likenesses in mainstream fare.  And, yes, not at all surprisingly, the likenesses of popular actresses have indeed been grafted into pornographic scenes in which they never appeared.

But while these issues for mainstream performing artists are debated as matters of policy and contract law in the coming year, stay tuned for information warfare to get a lot uglier through the use of these technologies.  New darling Democrat Alexandria Ocasio-Cortez was throwing Molotov cocktails at nuns?  Of course she was; here’s the video.  Is that Rep. Doug Collins giving a speech at a Nazi rally?  Roll that clip.  And all of it hosted on YouTube, which doesn’t have to remove any of it because…free speech?

The old cliché that “seeing is believing” has always been a duplicitous axiom.  Just about any major critic who has ever written about photography will tell you that seeing may result in believing but that this should not be confused with seeing the truth.  “…the camera’s rendering of reality must always hide more than it discloses,” wrote Susan Sontag in 1973, long before amateurs had the technological capacity to create realistic moving pictures depicting real people in scenes where they were never present.

Of course, if fake video clips like I describe are deployed en masse, they could trigger a healthy skepticism about believing what we see.  Presumably, this will depend on the degree of subtlety employed by the manipulators, and it is worth noting that the hackers at the St. Petersburg-based Internet Research Agency have been described as both subtle and sophisticated in their use of disinformation on social media.

Of course, if rampant fake video were to induce new skepticism, this implies a potential new hazard—that we no longer believe what is, in fact, true (i.e. mach tuck).  These are the kinds of challenges that companies like Google should be helping to address instead of spending their vast resources to scare the kids about comparatively modest proposals like Article 13 in the EU Digital Single Market copyright directive.

The irony that these companies invoke free speech in their efforts to protect their own revenue could not be more pellucid as their platforms and policies literally help to unravel the very reason speech was a protected civil right in the first place—a hope among a handful of 18th century idealists that the electorate, while always debating what should be, might at least find common ground in what is.

Why Do We Share Fake News?

The underlying premise of this blog—indeed its title—is a rejection of the tech-utopian pursuit of more as a virtue unto itself.  It is true that the presumed benefit of more access to more content happens to be one of the commonly-alleged rationales for mass copyright infringement, but the destructive power of more goes far beyond the interests of authors of creative works. And we’re watching this destruction happen in real time.

That the current President of the United States can get away with labeling news he does not like as “fake news” is one consequence of our misguided faith in more—arguably the most prominent and acutely-negative result of information democratization. By contrast, the very subtle moment that inspired this blog was the day a friend of mine—well-educated and liberal—shared a story in 2011 that I knew to be false.  When I pointed out the inaccuracy, he countered that he cared more about the point of view being advocated than the legitimacy of the article.  Then, when I discovered how many places this same article had been re-published online, the name The Illusion of More became a thing.

But why do people share fake news? Why was my otherwise-reasonable friend unwilling to temper his eagerness to share a story that was simply untrue? “When someone chooses to share a fake news story on Facebook, Twitter, via text message, or on Whatsapp; when they post a conservative meme to their wall; or when they ‘like’ a YouTube video about a pro-Trump conspiracy theory, they may well be doing it to signal their identity and affiliate themselves with like-minded others,” writes Alice E. Marwick in a new academic paper titled Why Do People Share Fake News? A Sociotechnical Model of Media Effects.

An assistant professor in the Department of Communication at the University of North Carolina, Marwick does not fully answer her titular question, acknowledging that she and her colleagues are far from those conclusions. Instead, she describes the complexity of the fake news problem, recommending avenues for further research and a language for more accurately discussing the issues.

“The term ‘fake news’ itself is simultaneously too broad and too narrow,” says Marwick, who advocates the more general term problematic information to encompass the complex universe of “hoaxes, memes, YouTube videos, conspiracy theories, and hyper-partisan news sites,” which all contribute in different ways to the fun-house mirror version of contemporary society we see via social media. At the same time, she describes political news as “one ingredient in a bouillabaisse of photographs, personal stories, advertisements, movie trailers, celebrity gossip, sports news…,” asserting that “In social spaces, the traditional journalistic value of objectivity no longer makes sense: virtually every story is augmented with someone’s opinion.”

The literal meaning of “fake news” is typically an enterprise in which the creator of a spoof has no agenda other than to chum the waters of social media with click-bait in order to generate ad revenue. Often, these “stories” are polysemous, says Marwick, meaning they can be interpreted in divergent ways and, therefore, shared for opposing reasons. She writes the following about one of these false stories: “‘White Baseball Players Kneel in the 50’s [sic] to Protest Black Lynchings,’ could be interpreted in support of NFL player Colin Kaepernick’s position on Black Lives Matter, or it could be a refutation of the history of White racism.”

The polysemy of this untrue story might mean more revenue for the fake news-maker, but it certainly means more reinforcement of competing, phantom narratives driving potentially reasonable citizens further apart.  Or if the story was created by a malicious actor, like a Russian agent, then the division it sows is the intent of the spoof. Regardless, the lack of truth in the story does not stop its being shared by people for divergent reasons, and Marwick wants to better understand why this is the case.

Opening the Overton Window

At present, Marwick notes, the data does reveal that Republicans are swimming in a larger pond of problematic information than Democrats, but there is “still a plethora of false content that appeals to people with left-wing sensibilities.” By democratizing news (meaning anybody gets to produce it), we have widened the Overton window—“the range of political viewpoints that are socially acceptable in American society”—thereby fostering what Marwick describes as an often subtle correspondence between problematic information online and more mainstream outlets, which encode extremist views into moderate-sounding reportage or messaging.

A good example of this occurred recently in my congressional district in Upstate New York. The white, Republican incumbent employed a rhetorical attack on his Black, Democratic challenger that likely would not have been attempted as recently as two years ago. Although the Democratic candidate is a pro-business attorney and Rhodes Scholar (qualities that might normally invite labels like “elitist”), the fact that he briefly dabbled in rap music early in his career inspired the incumbent Republican to assert that a “former rapper does not represent our rural values.”

The coded “former rapper” standing in for “Black man” cannot be seen as simply a consequence of Trumpism because Trump’s presidency itself is a manifestation of our having thrown open that Overton window long before he announced his candidacy. Instead, Marwick would likely identify the Republican’s rhetorical strategy as tapping into a “deep story,” in which the interests of rural Americans have allegedly been moved to the “back of the line” behind immigrants, refugees, people of color, etc. at the urging of liberal urbanites.

While this particular narrative may be grounded in the fact that, indeed, city-centric politics often do overlook the interests of rural citizens, the crazy, racist, and divisive aspects of this deep story have been reinvigorated and amplified by the diverse range of problematic information fed non-stop via internet platforms. The Illusion of More effect kicks in as consensus builds around repeated themes shared by like-minded people; and no amount of fact-checking, or even platform moderation per se, is going to dislodge misinformation from someone committed to finding evidence for his deep story.

And this folly knows no political loyalty. The “sudden” appearance of QAnon—a collective of conspiracy-minded Trump zealots who coalesced on chat boards like 4Chan—comprises both “right” and “left” identity types who share a common belief in a “deep state” conspiracy to which they imagine Trump stands in opposition. QAnon may be the main act in the center ring at the moment, but they are hardly the only clowns in the circus believing and spreading fake news. In fact, it would be a serious mistake—not to mention an arrogant one—to believe that disinformation is only aimed at, or effective upon, these caricatures. Take for example this statement:

“Morals, values, and identity will always defeat facts, reason, logic, and self-interest.”

That may read like something out of a training manual for cult indoctrination or the Tao of authoritarianism, but it actually comes from a slide deck created by Open Media to instruct its activists in the proper way to “frame” issues in support of—get this—digital rights! Think about that for a moment …

The fundamental premise of “digital rights” is that an “open internet” must thoroughly democratize speech and information because more information is inherently good for democratic society.  But Open Media states that the ideal way to evangelize these principles is to appeal to people’s emotions, because emotion will always defeat reason, thus contradicting the presumed value of information in the first place.

Intent of the Fakers Less Important Than the Effect of the Fakes

“…the networked nature of the internet and the ability to replicate and remix images, text, and video makes it impossible to determine where a particular idea, image, or meme originated, let alone pinpoint the intent of the author. This is particularly true considering the dominance of irony as an expressive and affective force in native internet content.”

To me, this statement by Marwick alludes to one of the most difficult problems in addressing the fake news disease—the often subtle correlation between the harmlessly entertaining and the poisonously effective. All those ironic, political memes (and I’ve shared a few) can have the tangible effect of eroding basic reason, even if the meme-maker was just going for laughs. “…messaging is reinforced through repetition; the more people see fake news headlines, the more likely they are to think they are accurate,” writes Marwick. “This is true even if the story is repeated in order to debunk it.”

In the seminal example of my friend sharing fake news in 2011, there was no way of knowing who had cut and pasted the original “story,” just as there is no way to know whether that political meme you just shared was made by some kid amusing himself or by a Russian hacker paid to make mischief or by some guy taking the wrong medication. But if we are indeed all steadily eroding our capacity for reason and widening the Overton window, Marwick warns that fact-checking is probably not the answer …

“Fact-checking is predicated on the assumption that people will change their mind [sic] when confronted with correct information,” writes Marwick, “which implies a very passive model of the audience … [but] this ignores a wide variety of social and cultural factors, and is not supported by empirical evidence. In fact, fact-checking may have the opposite effect of making stories ‘more sticky.’”

It is true that trying to get someone to reconsider a statement based on evidence alone is like trying to flick that nagging ball of Scotch tape from the end of your fingertips. The assumption that fact-checking is the antidote to fake news is derivative of the original, mistaken assumption that more information is the only solution to bad information. Marwick’s paper casts considerable doubt on the rhetoric that a society networked by information systems is inherently self-correcting, and it provides a conversation-starter that seeks a holistic approach to understanding why people share so much utter nonsense.

The why is important because it is largely a sociological or psychological inquiry rather than a purely technological—let alone legal—one.  As much as I advocate more platform responsibility in specific contexts, the fake news problem is not one we can blame solely on Facebook et al., nor one we can expect these companies to solve for us. To the contrary, if Marwick’s line of inquiry is on the right track, it suggests that the question why is something most of us should constantly be asking ourselves.


Photo by NomadSoul1

In the News: Sarah Jeong, “Fake News,” & Fair Use

It’s another one of those weeks when there’s stuff happening faster than I can write about any one thing. So, here’s a summary of a few items of note …

Anti-Copyright Ideologue Named Tech Writer at NYT

Twitter lit up yesterday with accusations that The New York Times has named a “racist” to its editorial board, citing anti-white tweets made by technology writer Sarah Jeong, who is Asian. These complaints read like a lot of whinging nonsense, taking Jeong’s comments out of the context in which she was apparently responding (albeit ill-advisedly) to racist or sexist remarks directed at her. (God, I love Twitter for the way it brings out our better angels.)

What is notable about Jeong as the Times’s new “lead writer on technology” is that she is an anti-copyright ideologue, who has written various articles and posts in a familiar, ill-informed style akin to Cory Doctorow’s. In February of 2016, I wrote a fairly extensive response to several errors she made in a Motherboard editorial predicting that copyright law might enable the Chinese government to disappear the famous “Tank Man” photograph from the internet.  It’s still online of course.

So, while I truly doubt Sarah Jeong is a racist and think the people labeling her as one should get a grip, I am equally skeptical that future NYT editorials on the intersection of technology and copyright will be well-balanced—or even accurate.

New Paper on Why People Share “Fake News”

Related to the above, I notice that the National Review site has two top stories featuring Sarah Jeong, the second of which is headlined “Yes, Anti-White Racism Exists.” This dumb and bogus narrative is what academic Alice E. Marwick would identify as a “deep story” in her new paper titled Why Do People Share Fake News? A Sociotechnical Model of Media Effects. Unable to fully answer that question yet, Marwick provides a complex, nuanced framework for further discussion, identifying socio-cultural factors that cannot be overpowered by solutions like fact-checking.

Although the volume of what Marwick calls problematic information is greater among the contemporary “right” at present, the contemporary “left” is by no means immune to the underlying reasons why people are apt to believe and spread “fake news,” hoaxes, and other forms of disinformation. I’m working on a longer post summarizing Marwick’s paper, but for those interested, her full paper is here.

TVEyes Files for Cert at Supreme Court

Filing a petition for a Supreme Court hearing in its ongoing litigation with Fox News, TVEyes hopes to get another shot at presenting arguments that failed in the Second Circuit in February of this year. Eriq Gardner for The Hollywood Reporter writes, “TVEyes’ attorney tells the Supreme Court that the 2nd Circuit decision conflicts with precedent and ‘creates a circuit split over a question of exceptional importance, including the proper balance under copyright law between the interests of a copyright holder and the First Amendment right to criticize and comment upon the copyright holder.’”

There is no brief to review yet, but that statement alone, taken from a request for an extension to file, does not seem to bode well for the Supreme Court granting cert for a couple of reasons. The first, as detailed in this post, is that the same appellate court that ruled in favor of Google Books also drew sharp distinctions between that case and TVEyes (ergo, maybe not so much of a split). The second reason is that it is consistent with precedent to hold that the First Amendment rights of users of a service do not automatically make the service itself non-infringing. This is a chronic argument made by tech-industry players, and as described in this post, courts generally take a dim view of corporations that attempt to “stand in the shoes” of their customers.

I’ll be surprised if SCOTUS agrees to review this case, but if it does grant cert, expect a storm of amicus briefs to follow.

EFF Honors Itself With Its Own Award

In a July 30 announcement, the Electronic Frontier Foundation named Stephanie Lenz, creator of the “Dancing Baby” video, among the recipients of this year’s Pioneer Award. “Stephanie Lenz’s activism over a home video posted online helped strengthen fair use law and brought nationwide attention to copyright controversies stemming from new, easy-to-use digital movie-making and sharing technologies.” Many of us will never experience the injustice of having a video removed and then restored to YouTube, but in that silent interval, when people could not watch Lenz’s baby boy dancing in the kitchen, her world—indeed the whole world—was just a little bit darker.

I wrote a post in October of 2016 summarizing the narrative of this decade-long EFFishing expedition; but suffice it to say this award-earning “activism” did not even begin as a fair use case; “Fair-Use Champion” Stephanie Lenz stated her own ambivalence about the video remaining on YouTube; the fair use/DMCA argument itself is razor thin; and I would bet anything that, beyond us copyright watchers, “nationwide attention” sounds something like this: Oh yeah, didn’t Prince sue some mom? And that didn’t even happen.

So, in the same way that Stephen Carlisle described Stephanie Lenz as the “nominal plaintiff” in Lenz v. UMG, it seems reasonable to call her the nominal recipient of this award, which should rightly go to the EFF’s own Corynne McSherry for Outstanding Achievement in PR Through Boondoggle Litigation.