Mach (digital) Tuck?

Not long after I wrote a post suggesting there is little difference between naive human engagement and bot engagement on policy issues, a couple of things happened.  One was the publication of a story by Max Read in New York Magazine reporting that a substantial (though hardly surprising) share of the material and the people on the internet is fake.  The other was a recent get-together with my best friend of 30 years, who told me that his eighth-grade son said he was worried that “Article 13 is going to destroy YouTube.”  Perhaps most tellingly, the kid complained to his father, “The [legislative] language is too vague!”

Now, I love my friend’s kids as I love my own, and I hope they and their contemporaries become engaged citizens as they grow up; but none of them is yet qualified to have an informed opinion about the precision, or lack thereof, in the language of bills before the European Parliament or any other legislative body.  The broad irony here is hard to miss:  In contemporary, digitally distracted America, we can only hope the next generation learns about the constitutional separation of powers and other rudimentary civics, while YouTube videos frighten them about “hazards” lurking in arcane proposals they cannot possibly have the background to understand.

This is no way to run a liberal democracy, yet my friend’s son in this scenario, through no fault of his own, has been turned into a bot.  Fortunately, his father doesn’t generally take the policy views of his children at face value, but you can bet that more than a few parents are apt to be less attentive when it comes to some obscure (let alone European) bit of cyber-policy.  Their kids announce, “Hey, XYZ will destroy the internet,” and the next day, a meme with the same message just happens to appear on Facebook, Mom or Dad shares it, and civilization is destroyed one guileless click at a time.

Of course, it isn’t one click at a time.  It’s millions of clicks that, apropos the aforementioned New York article, do not even represent real people in many cases.  In fact, Max Read states, “Studies generally suggest that, year after year, less than 60 percent of web traffic is human; some years, according to some researchers, a healthy majority of it is bot.”  

Even if that statistic is off by a wide margin, that is still a hell of a lot of bot, which leads me to the following question:  If we combine total bot + humans acting like bots + hackers & other disinformation brokers, how close are we to approaching maximum inescapable bullshit?  We might also add just plain bad reporting published under brand names, like this piece on forbes.com about Article 13, which refers to the non-existent “FAIR USE Act,” an error gross enough to disqualify the author from commenting on copyright law at all.  But here we are.

It was the same best friend mentioned above who, when we were college freshmen, taught me the aeronautical term mach tuck.  Put simply, this is a hazardous condition that occurs, usually in subsonic aircraft flying faster than they are designed to, when the airflow over the wing nears, or even exceeds, Mach 1.  A shock wave forms over the wing and shifts its center of lift aft, which causes the nose of the plane to pitch downward; and in an aircraft not designed to fly at or above the speed of sound, correction may not be possible.  The downward pitch naturally causes the plane to accelerate, which exacerbates the problem by increasing the airspeed over the top of the wing.  This vicious cycle, called mach tuck, can plunge the aircraft into an unrecoverable dive.
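For the curious, the runaway character of that cycle is easy to see in a toy numerical sketch.  The constants below are invented purely for illustration and bear no relation to real aerodynamics; the point is only that once the flow over the wing passes Mach 1, nose-down pitch and speed feed each other.

```python
# Toy sketch of the mach tuck feedback loop (invented constants,
# not real aerodynamics): pitching down adds speed, and added speed
# over the wing deepens the nose-down pitch.
speed = 0.95       # airflow over the wing, as a fraction of Mach 1
pitch_down = 0.0   # nose-down attitude, in arbitrary units

for step in range(12):
    if speed >= 1.0:
        # Supersonic flow over the wing: lift shifts aft, nose drops further.
        pitch_down += 0.5 * (speed - 1.0)
    # The steeper the dive, the faster the aircraft accelerates.
    speed += 0.01 + 0.05 * pitch_down
    print(f"step {step:2d}: flow = Mach {speed:.2f}, pitch-down = {pitch_down:.2f}")
```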

Obviously, I took that aerodynamic detour to ask whether the “information age” has achieved, or could soon achieve, mach tuck.  Is the flow of garbage streaming so quickly, and accelerated by forces beyond our control, that we are about to enter, or are already in, an information nose dive from which we cannot recover?

Optimistically, I don’t think so.  While it is not entirely clear that the pros of digital engagement sufficiently outweigh the cons, the pros are still evident and abundant.  No one can, or should, doubt that there is a wealth of useful, credentialed, instantly accessible information online.  The challenge, though, is that the major social platforms, which happen to be owned by the wealthiest and most politically influential corporations, distort information through a mosaic of visual stimuli that probably overpower critical thinking.  Assuming that’s true, things may be about to get worse before they get better.

I suspect 2019 will be the year a lot more people hear about “deepfakes,” face-swapping software built with freely available machine-learning tools (including Google’s open-source TensorFlow framework), which enables even a modestly skilled individual to effectively “skin” any face onto any body in a video clip.  Both “deepfakes” and the more elaborate CGI capabilities of motion picture studios have raised new questions about publicity rights and a celebrity’s ability to control the use of his or her likeness in mainstream fare.  And, yes, not at all surprisingly, the likenesses of popular actresses have indeed been grafted into pornographic scenes in which they did not appear.

But while these issues for mainstream performing artists will be debated as matters of policy and contract law in the coming year, stay tuned for information warfare to get a lot uglier through the use of these technologies.  New darling Democrat Alexandria Ocasio-Cortez was throwing Molotov cocktails at nuns?  Of course she was; here’s the video.  Is that Rep. Doug Collins giving a speech at a Nazi rally?  Roll that clip.  And all of it hosted on YouTube, which doesn’t have to remove any of it because…free speech?

The old cliché that “seeing is believing” has always been a duplicitous axiom.  Just about any major critic who has ever written about photography will tell you that seeing may result in believing but that this should not be confused with seeing the truth.  “…the camera’s rendering of reality must always hide more than it discloses,” wrote Susan Sontag in 1973, long before amateurs had the technological capacity to create realistic moving pictures depicting real people in scenes at which they were never present.

Of course, if fake video clips like those I describe are deployed en masse, there is the possibility that they could trigger a healthy skepticism about believing what we see.  Presumably, this will depend on the degree of subtlety employed by the manipulators, and it is worth noting that the trolls at the St. Petersburg-based Internet Research Agency have been described as both subtle and sophisticated in their use of disinformation on social media.

Then again, if rampant fake video were to induce new skepticism, this implies a potential new hazard: that we no longer believe what is, in fact, true (i.e., mach tuck).  These are the kinds of challenges that companies like Google should be helping to address instead of spending their vast resources to scare the kids about comparatively modest proposals like Article 13 of the EU’s Directive on Copyright in the Digital Single Market.

The irony that these companies invoke free speech in their efforts to protect their own revenue could not be more pellucid, as their platforms and policies literally help to unravel the very reason speech was a protected civil right in the first place: a hope among a handful of 18th-century idealists that the electorate, while always debating what should be, might at least find common ground in what is.

