Mach (digital) Tuck?

Not long after I wrote a post suggesting there is little difference between naive human engagement and bot engagement on policy issues, a couple of things happened.  One was the publication of a story by Max Read in New York Magazine reporting that a substantial (though hardly surprising) amount of the material and people on the internet is fake.  The other thing was a recent get-together with my best friend of 30 years, who told me that his eighth-grade son said he was worried that “Article 13 is going to destroy YouTube.”  Perhaps most tellingly, the kid complained to his father, “The [legislative] language is too vague!”

Now, I love my friend’s kids as I love my own, and I hope they and their contemporaries become engaged citizens as they grow up; but none of them is yet qualified to have an informed opinion about the precision, or lack thereof, in the language of bills proposed at the European Parliament or any other legislative body.  The broad irony here is hard to miss:  In contemporary, digitally-distracted America, we can only hope the next generation learns about the constitutional separation of powers and other rudimentary civics, while YouTube videos frighten them about “hazards” lurking in arcane proposals they cannot possibly have the background to understand.

This is no way to run a liberal democracy, yet my friend’s son in this scenario, through no fault of his own, has been turned into a bot.  Fortunately, his father doesn’t generally take the policy views of his children at face value, but you can bet that more than a few parents are apt to be less attentive when it comes to some obscure (let alone European) bit of cyber-policy.  Their kids announce, “Hey, XYZ will destroy the internet,” and the next day, a meme with the same message just happens to appear on Facebook, Mom or Dad shares it, and civilization is destroyed one guileless click at a time.

Of course, it isn’t one click at a time.  It’s millions of clicks that, apropos the aforementioned New York article, do not even represent real people in many cases.  In fact, Max Read states, “Studies generally suggest that, year after year, less than 60 percent of web traffic is human; some years, according to some researchers, a healthy majority of it is bot.”  

If that statistic is off by even a wide margin, that is still a hell of a lot of bot, which leads me to the following question:  If we combine total bots + humans acting like bots + hackers & other disinformation brokers, how close are we to approaching maximum inescapable bullshit?  We might also add just plain bad reporting published under brand names, like this piece on forbes.com about Article 13, which refers to the non-existent “FAIR USE Act” — an error gross enough to disqualify the author from commenting on copyright law at all.  But here we are.

It was the same best friend mentioned above who, when we were college freshmen, taught me the aeronautical term mach tuck.  Put simply, this is a hazardous condition that occurs, usually in a subsonic aircraft traveling faster than it is designed to, when the airflow over the wing nears, or even exceeds, Mach 1.  A shock wave forms over the wing and shifts its center of lift aft, causing the nose of the plane to pitch downward, and in an aircraft not designed to fly at or above the speed of sound, correction may not be possible.  The downward pitch naturally causes the plane to accelerate, which exacerbates the problem by increasing the airspeed over the top of the wing.  This vicious cycle, called mach tuck, can plunge the aircraft into an unrecoverable dive.

Obviously, I took that aerodynamic detour to ask whether the “information age” has achieved—or could soon achieve—mach tuck.  Is the flow of garbage streaming so quickly, and accelerated by forces beyond our control, that we are about to enter—or are already in—an information nose dive from which we cannot recover?

Optimistically, I don’t think so.  While it is not entirely clear that the pros of digital engagement sufficiently outweigh the cons, the pros are still evident and abundant.  No one can, or should, doubt that there is a wealth of useful, credentialed, instantly-accessible information online.  The challenge, though, is that the major social platforms—which happen to be owned by the wealthiest and most politically-influential corporations—distort information through a mosaic of visual stimuli that probably overpower critical thinking.  Assuming that’s true, things may be about to get worse before they get better.

I suspect 2019 will be the year a lot more people hear about “deepfakes,” software built on openly available machine-learning tools, which enables even a modestly-skilled individual to effectively “skin” any face onto any body in a video clip.  Both “deepfakes” and the more elaborate CGI capabilities of motion picture suppliers have raised new questions about publicity rights and celebrities’ ability to control the use of their likenesses in mainstream fare.  And, yes, not at all surprisingly, the likenesses of popular actresses have indeed been grafted into pornographic scenes in which they did not appear.

But while these issues for mainstream performing artists are debated as a matter of policy and contract law in the coming year, stay tuned for information warfare to get a lot uglier through the use of these technologies.  New darling Democrat Alexandria Ocasio-Cortez was throwing Molotov cocktails at nuns?  Of course she was, here’s the video.  Is that Rep. Doug Collins giving a speech at a Nazi rally?  Roll that clip.  And all of it hosted on YouTube, which doesn’t have to remove any of it because…free speech?

The old cliché that “seeing is believing” has always been a duplicitous axiom.  Just about any major critic who has ever written about photography will tell you that seeing may result in believing but that this should not be confused with seeing the truth.  “…the camera’s rendering of reality must always hide more than it discloses,” wrote Susan Sontag in 1973, long before amateurs had the technological capacity to create realistic moving pictures depicting real people in scenes where they were never present.

Of course, if fake video clips like those I describe are deployed en masse, there is the possibility that this could trigger a healthy skepticism about believing what we see.  Presumably, this will depend on the degree of subtlety employed by the manipulators, and it is worth noting that the hackers at the St. Petersburg-based Internet Research Agency have been described as both subtle and sophisticated in their use of disinformation on social media.

On the other hand, if rampant fake video were to induce new skepticism, this implies a potential new hazard—that we no longer believe what is, in fact, true (i.e. mach tuck).  These are the kinds of challenges that companies like Google should be helping to address instead of spending their vast resources to scare the kids about comparatively modest proposals like Article 13 of the EU Digital Single Market Directive.

The irony that these companies invoke free speech in their efforts to protect their own revenue could not be more pellucid as their platforms and policies literally help to unravel the very reason speech was a protected civil right in the first place—a hope among a handful of 18th century idealists that the electorate, while always debating what should be, might at least find common ground in what is.

Might As Well Be Bots

So, I don’t engage very often via Twitter, but once in a while, I respond to something that catches my attention and then usually regret spending time responding to the responses.  Last week, I noticed that Pirate Party MEP Julia Reda—the face, voice, and tweetdeck of anti-Article 13 activism in the EU—posted an odd tweet, and I replied … 

Because, of course, even if Tumblr’s efforts to use AI to identify pornography are a) accurately reported; and b) laughably ineffective, it is misleading for Reda to suggest that this folly is particularly instructive as to the purpose or eventual function of Article 13.  What she means to imply, of course, is that any comparable technology, which may be used to identify content that allegedly infringes copyright on a large platform like YouTube, will result in the same kind of errors that are reportedly happening on Tumblr.

I consider this tweet to be scare-mongering for many reasons, but here are three simple ones:  1) the existing technologies currently used to identify copyrighted material are already better than whatever is being described in the Tumblr/porn example; 2) it is needlessly defeatist to say that these technologies can never be improved and/or supported by human oversight to alleviate error; and 3) if any copyright ID system is too aggressive and error-prone, the rights holders advocating Article 13 aren’t going to like it either.  All of which leads me to conclude that calming down is more rational than, once again, declaring that the internet needs to be “saved.”

So, Reda and I had a brief exchange in the service of nothing (i.e. the reason I don’t like the forum), and went our merry ways.  But I did notice that at least one of the “users” who liked and retweeted one of Reda’s responses to me looked an awful lot like a bot.  The account was a handle and an avatar, it had just a few followers, and its tweetdeck was almost exclusively about the evils of Article 13.  So, while scrolling that thread and wondering whether the account might be a bot programmed to RT anti-Article 13 stuff, I saw this tweet posted by a different account with zero followers…
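For what it’s worth, the informal signals I relied on—a bare handle-and-avatar profile, almost no followers, a single-issue timeline—can be expressed as a naive scoring heuristic.  This is purely an illustrative sketch; the field names, thresholds, and weights are my own assumptions, not any platform’s actual API or any real detection method:

```python
# Naive bot-likeness score from the informal signals described above.
# All account fields, thresholds, and weights are illustrative assumptions.

def bot_score(account: dict) -> float:
    """Return a rough score between 0.0 and 1.0; higher means more bot-like."""
    score = 0.0
    if not account.get("display_name"):          # bare handle, no real name
        score += 0.25
    if account.get("followers", 0) < 10:         # almost no followers
        score += 0.25
    topics = account.get("tweet_topics", [])
    if topics:
        # Fraction of the timeline devoted to its single most common topic.
        dominant = max(topics.count(t) for t in set(topics)) / len(topics)
        if dominant > 0.9:                       # single-issue timeline
            score += 0.5
    return score

# A hypothetical account matching the profile described in the thread above.
suspect = {
    "display_name": "",
    "followers": 3,
    "tweet_topics": ["article13"] * 19 + ["other"],
}
print(bot_score(suspect))  # 1.0 (all three signals fire)
```

Real detection is obviously far more involved, but even a crude tally like this captures why the account stood out.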

And this prompted a new thought.  What difference does it make if a tweet like the above is posted by a bot or a real person?  Because if social media platforms like Twitter train real people to respond with Pavlovian certainty to any given issue, they might as well be bots.  Either this individual simply doesn’t know that the platforms he says will be “destroyed” are the biggest of big corporations; or he is so well trained to respond to certain signals that he’ll just remain blissfully unaware of his own cognitive dissonance.  Or he’s a bot.

Either way, same result.  Some other bot, or mindless person, or ten-year-old child repeats the unfounded assertion that, for instance, the legislative language is “vague,” and boom—it’s now a fact.  Why would anyone take at face value some anonymous tweet claiming that a body of legislative language—in any area of law—is vague?  Because they want to believe it, and the desire to perpetuate that narrative is sustained by knowing diddly squat about the legislative language itself.

When the hyperventilating saga that was the anti-SOPA campaign peaked in early 2012, the internet giants still enjoyed a general benefit of the doubt that they had built platforms that were truly making global democracy work better.  (And that they had built these platforms out of the goodness of their hearts!)  So, all those anti-SOPA headlines warning people not to let anyone “break the internet” were understandably hard to counter with any kind of cool reason.

Today, though, it is curious—if not a little bit frightening—that even after the fallout from stories like Cambridge Analytica, the “Save Your Internet” battle cry is still effective in the current opposition to Article 13.  It is, after all, a reprise of the same digital dirge that was so effective seven years ago; but surely, this general call to arms cannot jibe with what we’ve learned over the past two years about major platforms and a number of paradigms not worth saving.

In 2011, I wondered how many non-constituents were marshaled to stop American legislation (SOPA/PIPA) in its tracks.  How many foreign citizens? How many children?  How many bots?  Because, as David Lowery has detailed in a multi-part post, the methods employed by Big Tech to sway public policy may be one of a handful of legitimate threats to democracies around the world.  And on this topic as to who—or what—is being rallied to action, TorrentFreak published a new post that misses, or purposely obfuscates, a very important distinction.  Andy accuses the IFPI of hypocrisy for criticizing Julia Reda’s shout-out to children in this tweet:

Andy compares Reda’s appeal to children to the fact that major copyright interests have often launched initiatives to educate kids about copyright and piracy.  “…it’s pretty ironic that IFPI has called out Reda for informing kids about copyright law to further the aims of ‘big tech companies’. As we all know, the music and movie industries have been happily doing exactly the same to further their own aims for at least ten years and probably more,” he writes.   

But the differences between Reda’s targeting kids on social media and the kind of initiatives Andy refers to are substantial and significant.  At a very basic level, educating children about how copyright works may be offensive to the pirates out there, but copyright has been part of the legal fabric of Europe and the U.S. for a couple of centuries, so it’s not exactly propagandist to explain its function in age-appropriate ways to groups of schoolchildren.  And given that plagiarism can end someone’s college or university career, the foundations of copyright are in no way anathema to general education.  Outreach to schools on the subject of piracy and copyright tends to include the following lessons or discussions:

  1. education about the skilled people who make the movies, music, etc. people enjoy.
  2. a message that taking things without paying for them is both illegal and wrong.
  3. a message that respecting creators falls under the principle of the golden rule.

While these themes may be antithetical to pirate rationalizations for mass infringement, they’re not exactly outliers to the fundamentals that most people try to teach their kids. (Substitute creator for farmer, and the discussion will be very similar.)  In contrast to the implications of Reda’s tweet, major rights holders don’t generally engage children via Twitter to take direct action aimed at promoting or stopping specific legislation.

Nobody can doubt that when organizations like movie studios fund education programs in response to piracy, the effort is industry-serving, but those types of broad initiatives do not compare to an elected official addressing teens and tweens on social media and telling them (untruthfully) that YouTube won’t work anymore because of a policy those kids are not going to understand. And that fairly well sums up what I think about all of this—that the so-called defenders of “the internet,” who appeal to democratic principles in that effort, consistently demonstrate exactly why “the internet” isn’t worth defending.


Photo source by davincidig

YouTube’s Tactics Re. Article 13 Are the Real Concern

When a media conglomerate is the subject of a news story, we expect the news organization owned by the parent company to acknowledge that relationship in its reporting.  So, when ABC News reports a story, positive or negative, about The Walt Disney Company, it is standard practice that the reporter remind viewers that she is talking about her ultimate employer.  Unfortunately, the paradigm is very different when it comes to new media companies like YouTube, which can leverage the global reach of its platform (fueled by the capabilities of Google) to evangelize any message that serves its policy interests.

In a new guest post on The Trichordist, Volker Rieck lays out the manner in which YouTube uses the power of the platform to influence public debate (i.e. scare the bejesus out of people) when seeking a policy outcome favorable to the company.  After CEO Susan Wojcicki addressed the community of YouTube creators in a blog post and video warning them that Article 13 of the EU Digital Single Market Directive threatens their livelihoods, she got the response she was looking for.  As Rieck describes…

“Wild claims circulated that YouTube channel operators would already see their livelihoods threatened in 2019, that Article 13 was a censorship law, and so on. The platform helped the videos made in response to its own appeal to become highly visible and to reach wide audiences by displaying them on user home pages and by categorizing them as “trending.” Three of the top 5 videos in the YouTube trending charts at the beginning of November transported these dystopian visions.”

So, apropos my intro, even if the claims and assumptions made about Article 13 were accurate—and they are not—it should be more than a little frightening that a corporation with the scope of influence of YouTube can so effectively shape reality in regard to any matter of public policy.  To quote a recent post by Neil Turkewitz, responding to the EFF’s lopsided approach to Article 13, he summarizes the current draft of the directive in the following sober terms:

“… it requires large commercial platforms who are in the business of content distribution (defined in the legislation) to license the works that they are distributing, and to take steps to guard against the distribution of works for which it is not licensed. While the use of filters is not explicitly mentioned (unlike an earlier version of the Article), it is anticipated by most parties that most covered platforms would discharge their obligations to prevent distribution of infringing materials through the use of available technologies, either bespoke like ContentID, or off the shelf from a supplier like AudibleMagic.”

It is also important to keep in mind that, while it is timely for all creators (including YouTubers) to become better informed about Article 13 and to weigh in on the merits of the proposals, it will take at least a couple of years for all of the member states to implement the directive.  Thus, YouTube’s efforts to panic its entrepreneurial creators this month should be reason enough to question both its methods and its motives.  Is it really about those creators, or is it about a $160-billion company not wanting to pay license fees to other creators?

On the one hand, this type of scare-mongering is business as usual.  A corporation or industry doesn’t want the responsibility or cost of complying with a proposed law, and so tells consumers or employees (or both) that they will suffer if the policy in question were to be implemented.  But on the other hand, when a media platform like YouTube claims that a new policy will have “unintended consequences” like shutting down various channels, the company is uniquely empowered to spread its self-serving message and to manipulate user experiences in order to prioritize that message over other narratives.  As Rieck puts it …

“Ultimately, the way YouTube channels have been pressed into the service of the platform demonstrates just how urgent the need for measured political regulation of the platform has now become and how easy it is for the platform to exploit the ecosystem of private and semi-professional pseudo-journalism it hosts for its own ends.”

I would go so far as to at least entertain the possibility that YouTube could shut down or severely limit various channels as a false-flag tactic aimed at sowing further resentment against proposals like Article 13.  Perhaps the company would never engage in such an underhanded scheme, but really, what’s to stop them?  After all, they are already willing to engage in bad-faith PR designed to mislead YouTubers about the true nature of the EU directive.  In her open letter to YouTubers, CEO Susan Wojcicki, states:

“Article 13 as written threatens to shut down the ability of millions of people — from creators like you to everyday users — to upload content to platforms like YouTube. And it threatens to block users in the EU from viewing content that is already live on the channels of creators everywhere. This includes YouTube’s incredible video library of educational content, such as language classes, physics tutorials and other how-to’s.”

Really?  Even if we set aside the fact that Article 13 is a proposal to develop protocols that will take time and further negotiations to implement (if they happen at all), this statement implies that a very high percentage of YouTube channels rely substantially on unlicensed copyrighted material.  If that’s the case, why should that status quo be preserved?  I’ve seen a lot of funny, informative, creative videos produced for YouTube that do not make any use of other creators’ protected works.

For the YouTube creators who do use some portion of protected works, Wojcicki raises a subtle but important dichotomy when she addresses them as “a diverse community of creators who are building the next generation of media companies.”  Because that sounds to any reasonable person like a business enterprise.  And if these YouTubers are indeed engaged in business, then why shouldn’t they have the same responsibilities as every other type of professional creator to work within boundaries that respect copyrights?

It seems that when it suits the platform’s interests, we are meant to think of YouTubers as either hapless children (remember Lawrence Lessig?), who cannot be expected to know about copyright; or we are meant to think of them as the vanguard generation of new creative professionals, who should not be burdened by copyright.  Notice how, in either case, YouTube seeks to avoid its responsibility—as the only multi-billion-dollar media company in this narrative—by aligning its interests with the interchangeable interests of its users.

I recognize that underlying YouTube’s ability to frighten this class of creators about Article 13 is a litany of mistakes and abuses of existing models like Content ID or the DMCA notice and takedown process.  YouTube creators have had their own works targeted, either through error or willful misuse of these systems; and bad actors have targeted works they do not legally represent. 

While the anecdotes of bad-faith use of these systems are true, they feed a broader narrative which is not true:  that abuse of content-filtering systems is so rampant that the status quo is preferable to any attempt to make these systems work better for all stakeholders.  The status quo may be working for YouTube’s bottom line, but it certainly is not working for rights holders whose works are infringed at uncontrollable volume on the platform.   In fact, I have yet to see any data that even indicates that filtering or DMCA abuse is anywhere near the scope of infringement.  

Meanwhile, assuming Article 13 becomes law in the EU, YouTube creators have at least a couple of years to assess the extent to which their channels truly rely on the protected works of other authors.  Those who do not use other people’s works should be entirely unaffected; and if they are, their complaint may be properly directed at YouTube rather than Article 13.  Creators who use protected works legally—either by license or fair use—should play a particularly active (but informed) role in these developments.  

I suspect that, as professional creators, YouTubers will increasingly share common cause with other types of creators.  In fact, YouTube’s July launch of its Copyright Match system to address creator-to-creator disputes certainly suggests that YouTubers care about their own copyrights and should, therefore, take a proactive rather than a reactive look at the goals of Article 13.  After all, with regard to the way Wojcicki’s letter spawned a lot of misinformed outrage, it’s worth noting that just because this class of creators uses YouTube is no reason to let YouTube use them.


Source illustration by studiostoks