Democracy Officially Improved by Information Age

[Image: FB 1800]

With the inevitability of Donald Trump’s nomination as the GOP candidate for president, I think we can officially declare the “information revolution” a rollicking success, don’t you?  When the savants and silicon pioneers of the 80s and 90s predicted that the Information Superhighway would be a great leap forward for democracy, I don’t remember anyone intimating that we would ride that highway to the demolition derby that American politics have since become. The unlikely, populist rise of an arrogant billionaire, whose monosyllabic campaign is textbook authoritarianism, is merely the latest extreme example suggesting that information by itself is utter bullshit.  Without context, without reason, without compassion and empathy, information is meaningless no matter how much its volume or speed of delivery may increase. Rarely, in all the theater of our post-internet politics, can it be said that Americans are splashing about in deeper tide pools of ideas than we were 200 years ago, when information moved at the speed of the printing press and the horse.

The image above refers to the highly-contentious campaign between Federalist John Adams and Republican Thomas Jefferson in 1800, in which the factions supporting these two founding fathers slung ugly at one another in ways that would have made shareholders at CNN wet themselves with pure Cristal.  Donald Trump’s circus of vitriol is amateur hour* compared even to the pundits of 1800.  As one writer for the Connecticut Courant wrote of Jefferson, whose deism was the focus of many a Federalist concern, …

Look at your houses, your parents, your wives, and your children.  Are you prepared to see your dwellings in flames, hoary hairs bathed in blood, female chastity violated, or children writhing on the pike and the halbert?

That’s the real stuff right there.  All Trump did to launch his campaign was insult every Hispanic on earth.  But in 1800, the villain who was going to see to it that your women were violated and your children murdered was none other than the author of the Declaration of Independence himself. And according to the book Presidential Campaigns by Paul F. Boller, Jr., whence these stories come, a Connecticut woman really did try to hide her family Bible with a Jeffersonian friend for fear that the new president’s goons would soon be coming to confiscate and destroy it—her logic being that, “They’ll never think of looking in the house of a Democrat** for a Bible!” (Sounds like a theme we’ve heard for the past eight years regarding Obama and guns, no?)

Jefferson’s views remain central to the ongoing, constitutional debate on the separation between religion and state—a passionate argument that still produces behaviors as preposterous as—if not more preposterous than—the woman hiding her Bible from the president.  American Christians in 1800 were apt to believe that Jefferson would end religion altogether in the United States—a falsehood that was largely manufactured by the Federalist party and Christian leaders, who relentlessly blasted Jefferson’s supposed atheism.  And Jefferson was not above firing back with some exaggeration himself, arguing that if, as magistrate, he were to declare national days of thanksgiving and fasts (as Adams and Washington had done), the nation might as well reverse the revolution and return to rule by the English monarch, who was literally the head of the national church.  The separation issue, for Jefferson, was central to the rationale for republicanism itself—an idea not without historic merit, but a nuance lost amid the emotions of the public.

It had been less than a decade since the ratification of the Bill of Rights, and the realities of governance had already divided the heroes of the revolution and framers of the Constitution into snarling factions.  And though there was real animosity in many cases (e.g., the Federalist Hamilton hated Adams’s handling of the presidency), the public perception of the candidates’ true beliefs and ideas was as distorted by emotion and as exploited by the opposition as it is today. And these were the dudes who invented the country! Had there been Twitter and Facebook—had information moved as fast then as it does now—it is conceivable that the new and tenuous republic—which had not yet tested most of its constitutional principles—might not have withstood the heavy onslaught of utter nonsense that today aggregates so much empty-headed outrage into tangible political forces.

As to the real question, though—whether the internet has been good or bad for democracy—it’s hard to deny that it has certainly made what was already bad considerably more effective, an outcome never openly imagined in the ebullient, early days of the digital revolution.  It seems pretty clear now that groups and individuals who were previously and properly relegated to the “lunatic fringe” have coalesced via networked communications into bodies of political force that draw completely new—yet generally regressive—boundaries of political ideology.

The downside of “democratizing” the dissemination of information is that anybody gets to play and that anybody really does mean anybody. And because it is the nature of the internet to connect people to the information they want to know and then connect like-minded people to one another, we might have expected that the lunatic fringes of both the left and right would congregate at either end of the pole and give rise to new political factions among digital natives—factions that cannot properly be defined as classically liberal or conservative, democrat or republican.   And they like it that way.

Among the extreme left, we have the social justice warrior types—the ones who see micro-aggressions in every interaction, demand safe spaces on college campuses, refuse to read assigned classics they find “triggering,” and who use terms like cultural appropriation and patriarchy as excuses for disengagement while claiming to respect diversity. They are insufferable people, who have managed to use the privilege of their educational opportunities to invent new forms of cultural segregation, beginning with mandatory self-flagellation by all white, heterosexual males.  Naturally, the SJWs, as they are called, are an internet phenomenon; and their antagonists, the alt-right, also found one another in cyberspace.  Both sides have grown up expressing their political sensibilities in the intolerant lingo of Trollish; and the one aspect they seem to have in common is tribalism. Their sublimation of the individual for the sake of the hive is unquestionably a reflection of their digital nativity, and it is a quality that confounds sensibilities among both democrats and republicans for its inherent un-Americanness.

Jack Hunter, a conservative writing for The Daily Beast about the alt-right, describes a foggy space between troll-like behavior reacting to the identity politics of the social justice warriors and the extent to which that rhetoric inevitably finds kinship with honestly-meant white-supremacist views. Hunter writes, “ … the heart of alt-right tribalism leads to something that is definitively anti-libertarian and functionally authoritarian. The alt-right is characterized by an extreme collectivism that is unavoidably racist.”

When Donald Trump declares without a hint of nuance that, “PC in this country has gone too far,” he successfully rallies both the troll and the real racist to his brand of intolerance. In many ways, Trump’s nationalist theatrics are a thuggish version of the optimistic and reactionary campaign run by Ronald Reagan in 1980—invoking a nostalgia for an idyllic America that never existed—unless one views as utopian the kind of innocence that would play in the fog of DDT trucks, picnic at the edge of nuclear test blast zones, and demand that the races and sexes remain neatly organized into their “rightful places.” But the important shift in tone from affable Reagan to boorish Trump brings groups like the KKK, Neo-Nazis, and the openly racist alt-right out of the shadows and into the mainstream of national debate. Meanwhile, the Bernie-or-Bust crowd—many of whom would be voting for the first or second time—seem to have decided that if we cannot attain a new socialist utopia in the next four years, they’re just going to pack it in.

Certainly, there are many interrelated and complex reasons why our politics are the way they are—why they have always been this way. In a sense, I suppose we have to admit that the digital revolution has been “good for democracy” to the extent that vox populi is louder than ever.  Whether or not the voice is saying anything we can call progress is a whole other question.


* Since the publication of this post, it is fair to say that Trump’s rhetoric has exceeded the hyperbole of the past.

**The Republican Party of Jefferson would later become the Democratic Party, but it was common to use the term in general discussion prior to the official change.

An Alternate History for Music, YouTube, & Everything Else

Take all the best qualities of the web and imagine for a moment that the boundaries of intellectual property ownership are respected and upheld–at least on the major, legal platforms.  Imagine, for instance, that YouTube still exists, but that one would not have typically used the platform to stream an unlicensed recording of a popular song by a popular artist.  Instead, in this alternate history, the artists’ individual websites developed as the only places where users could stream tracks, read lyrics, and even share tracks via social media.  Meanwhile, YouTube could still have evolved as a platform for original expression, including parodies and covers of popular songs, most of which would likely be left alone by the rights holders, just as they are now.

Of course, it’s hard to imagine YouTube having grown without its infringe-first/settle-later strategy, conveniently protected by flaws in the DMCA; but as long as I’m projecting a hypothetical, I ask readers to imagine what we might have gained or lost if the market had developed just a little differently in this regard.  YouTube was able to use the leverage of mass infringement in order to grow market share and turn the platform into a default destination for streaming music, but that’s not the only way this history had to unfold. If YouTube had never been able to—or had chosen not to—host millions of unlicensed, user-uploaded songs; and if the default user habit had instead been to first visit the artist website to do all the things they now use YouTube for, what would be lost for the fan?  I would argue nothing.  On the other hand, what would probably be gained is a more interesting, more diverse, and more entrepreneurial digital market for music makers and listeners alike.

Right now, if you visit a major star’s website, you probably won’t find full tracks to stream or share via Facebook, etc.  But if the artist site had an exclusive, if the millions of user-uploaded streams on YouTube alone were no longer part of the equation, I bet most artists would have begun to recognize the incentives to make streams available on their own sites.  Google could still sell advertising in this paradigm, except that the artists themselves (gasp) would have a stronger voice in negotiating terms because they would not be held hostage by the rock-and-hard-place deal in the YouTube model.

Even if we look at a fairly small band, like The Felice Brothers, who are popular local artists in the Hudson Valley where I live, this model could theoretically apply.  Their top ten tracks on YouTube have generated about 1.3 million total plays.  That’s not Taylor Swift or Adele territory, but if that traffic were driven exclusively to the band’s website, would it be worth it to the artists to provide streams, lyrics, and sharing embeds for social media?  Certainly it seems that capturing that traffic could not be worth less than the ancillary (or shared) value the band gets via the YouTube platform; and it could easily be worth considerably more simply because the fan would likely have a more in-depth engagement via the official website.

At the same time, Google could do its thing, like recommend other artists based on your liking The Felice Brothers, and it could even monetize that piece of the transaction without actually having to “own” the experience that rightly belongs to the artists.  That would be less attractive to Google and its shareholders, I’m sure, but we’re talking user/creator experience here, not revenues for one huge company.

As I say, I believe user experience overall could be much richer than it is.  Imagine a teenager wants to hear a new song a friend played for her, but she doesn’t remember who the artist is or even the correct title of the song.  This is, of course, where Google makes her young life better than ours was; its PageRank algorithm helps her (even though she only knows a few terms) find the artist’s website in a matter of seconds. Here, she is not only able to listen to the song she had in mind, but she is also more inclined to learn something about the artist(s), more likely to explore other tracks, share music she finds on social media, read lyrics, etc., and begin to discover how big a fan/consumer she will become.  Just finding a copy of a song that some other fan uploaded to YouTube doesn’t offer much of a relationship at all for the prospective new fan.

The point is that, technically, all of the best features for both artists and fans could still exist in an online market in which YouTube is exclusively the platform it claims to be—a place for original expression—rather than the platform it is—a place for original expression and massive infringement of popular creative works.  And I think this is more or less how many of us in the 1990s imagined the web might evolve—as a more diverse market for entrepreneurism rather than a consolidated market with a few dominant platforms that figured out how to commandeer the relationship between a fan and creator, and then sell that relationship back to both parties by converting the transaction into ad sales.

Of course, after acquiring all the traffic that may otherwise have gone to the artists’ individual sites, YouTube was then able to position itself as indispensable and, therefore, free to dictate—and change—terms at will.  Even the revenue-sharing program through Content ID was only introduced after YouTube had cornered substantial market share by means of user-generated infringement shielded by the DMCA.  And based on comments from both entertainment attorneys and independent musical artists I know, Content ID may best be described as a mercurial and inscrutable arrangement for smaller creators and/or a tool used to leverage the platform’s ill-gotten market share to make a take-it-or-leave-it “deal” with the majors.  Yet, for all the ways the YouTube platform siphoned off financial value and weakened bargaining power for many types of music creators, it’s not at all clear that we fans really needed the platform in order to enjoy exactly the same experiences we could have in a more diverse market distributed across multiple sites.

There may be no going back, of course; but in the larger dialogue about issues like YouTube’s extraordinary leverage with creative artists and the extent to which the DMCA provides cover for the predatory, winner-take-all nature of these platforms, I think it’s important to remember that the way things are is not necessarily the way they had to be—or have to remain. This is, in fact, one of the underlying themes running through every criticism I’ve read by Jaron Lanier, formerly one of the leading architects of these systems, who now consistently argues that the web we have is engineered backwards—so that humans serve the computers rather than the other way around. Rather than think of the design of Web 2.0 as having been inevitable—as technologically deterministic—we should recognize that it functions exactly as humans coded it to function.  As such, it is not entirely impossible or unreasonable to imagine how it might be better.

 


Photo by pkorbel

R Street & Techdirt Dissing Prince

“R Street is a free-market think tank with a pragmatic approach to public policy challenges.”  —R Street About Page

If one is going to comment on public policy, then one ought to make an effort a) to understand the nature of a given topic; and b) to present facts instead of fiction.  In this regard, R Street might want to be careful about republishing articles from the blog Techdirt, as it did last week with this op-ed by Zach Graves all about what Prince did wrong in the management of his career.

Graves notes that although Prince was a musical genius, he was one who “…never quite found the right approach when it came to licensing his music for redistribution—in spite of the fact that he sold over 100 million records, placing him among the best-selling artists of all time.”  If it seems as though the second half of that statement contradicts the first, that’s because it does.  When you combine terms like musical genius and best-selling artist, it takes some chutzpah to presume to know best—in a post-mortem analysis—how the artist in question might have made wiser choices.  In fact, Graves is working overtime trying to shoehorn Prince into an online market the artist rejected. He writes, “ … his fans were left in an odd position, on the news of his death, of being frequently unable to provide links to Prince’s massive oeuvre.”

Speaking as a representative of the 80s, and of those who have been Prince fans since he first emerged, that statement is not only surprisingly disrespectful to the wishes of a beloved artist who just passed away, but it lays bare a mindset that actually believes these fleeting moments on social media are of much greater emotional and cultural value than they really are.  The shared sense of loss among Prince’s fans is not diminished because a friend does not post “Little Red Corvette” on Facebook via YouTube.  Micro-moments like these are fine, occasionally interesting, but are utterly forgettable because of the very nature of the interface itself. Our relationships with Prince’s music, as with all music, are based on associations his songs have with tangibly profound, wonderful, painful, or intimate moments in our lives.  And if the next generation doesn’t form these same types of relationships with music, then they probably won’t relate to music at all.  Meanwhile, the fact that YouTube & Co. were deprived of a few million advertising impressions they would have sold on the trending of Prince’s death is exactly what he wanted to deny these companies.  He saw through the lie that the OSP’s revenue model has anything whatsoever to do with his music or our love of it. And he was absolutely right.

It is fairly well known that Prince spent considerable effort and resources during his career in order to gain and maintain control of his work.  Quite simply, he did not like anyone dictating how, when, or where his music ought to be distributed—not Warner Music 23 years ago, not YouTube last month.  So, the fact that Graves chose to compare and contrast the market potential of Tidal (which licensed Prince) with Spotify (which Prince rejected) is entirely irrelevant, whether Graves’s math in this case is sound or not.  Prince was a Mozart.  And it’s rare to see that kind of genius without the individual also being fiercely proprietary about his work.  And although Graves acknowledges that Prince’s decision to license exclusively through Tidal “may have been a reflection of his proclivity to assert tight control of his brand,” he remains steadfast in his bias when he writes “…making music less accessible poses serious challenges for artists and consumers alike.  For one thing, as English singer/songwriter Lily Allen explains, it will reinvigorate incentives for piracy.”

News flash:  Prince did not disappear into obscurity despite his rejection of these “free” platforms.  Yet, somehow, Graves believes the “lesson” we are supposed to draw from Prince’s legacy is that this hugely successful, influential, and universally-respected artist was fundamentally wrong, while the new-economy sages at Techdirt and R Street are right. Their logic says that if the artist chooses not to be fractionally exploited by a YouTube or a Spotify, then he naturally deserves to be fully exploited by outright piracy.  Put that way, it sounds more outrageous, right? But that’s essentially what Graves and others are saying.  Prince told that proposition to go screw itself, and maybe that’s the real lesson he leaves behind.

Of course, Graves actually reprises the blame-the-artist-for-piracy theme because he wants to point readers to a remarkably obtuse statistic presented by Techdirt founder Mike Masnick’s very own, brand-new “think tank” called Copia.  The stat says that, “55% of 18-29 year-olds pirate LESS when offered a free, legal alternative.”  Wait for it. It’ll happen…

I’m no longer amazed at the capacity some people have for presenting bad news as if it were good news.  Because somehow Copia et al. think nobody will notice that the truly stunning fact revealed by this stat is that 45% of the demographic will continue to pirate as much as ever no matter what free, legal alternatives are available.  But creators should feel mollified by the prospect that the other 55% of the market will pirate less!  It is certainly indicative of a Kool-Aid narcosis that Techdirt, Copia, and R Street would even present these data with a straight face. After all, if one were to provide the same market research to the dumbest investor on earth, no matter what the business sector, he would tell you that no investment will be forthcoming.  Try pitching investors and telling them that 45% of the target market is guaranteed to steal from you while 55% of the market will only steal some from you, and watch what happens.

Perhaps most importantly, R Street in particular should be held accountable for republishing an article that completely misrepresents the facts in what is commonly called the “dancing baby” case.  Graves writes …

Famously, Prince, via Universal Music, was behind the “dancing baby” DMCA lawsuit, which featured Prince’s “Let’s Go Crazy” playing faintly in the background of a short clip as a toddler danced. Ultimately our friends at EFF, who were representing defendant Stephanie Lenz, prevailed on their fair use claim. In 2013, EFF awarded him their “Raspberry Beret Lifetime Aggrievement Award” for “extraordinary abuses of the takedown process in the name of silencing speech.”

Setting aside the relatively minor detail that Prince himself was never directly involved in this case, the most important fact is that EFF sued UMG after the Lenz video was actually restored to YouTube via counter-notice procedure; and nobody ever sued Stephanie Lenz–at least not pertaining to this matter.  The reader is free to review the facts of this eight-year litigation and decide for himself whether the temporary takedown of the “dancing baby” video represents “award-winning” abuse of DMCA—or if perhaps the EFF chose this case because it would inevitably lead to misrepresentation exactly like the quote above.  “Prince sues mom and baby” makes good drama, but it just didn’t happen.  And to say that it did in the immediate aftermath of this artist’s passing is as rude as it is irresponsible.

As for Zach Graves’s concern that Prince’s music may not “reach a new generation of fans” due to its absence from certain free platforms, I’d like to tell him not to worry.  Prince’s work has touched millions of people and influenced thousands of other musicians around the world. It will transcend generations in spite of what web platforms have done to culture and memory itself.  At the same time, although YouTube’s predatory and monopolistic strategy may position its platform as “essential” in a certain sense for musical artists, one must ask if this winner-take-all outcome is the kind of “free-market pragmatic approach” R Street policy hopes to support.  The idea that Prince’s music needs YouTube in order to live on in our cultural memory would be a quaint conceit if it were not the kind of arrogant proposition that has hypnotized many policy thinkers by means of ceaseless repetition.