Patent Hypocrisy Raises Privacy Concerns

A few posts ago, I reported that the major lobbying muscle in the Internet industry backs a patent “reform” bill (HR 9) called the Innovation Act. I argued in that post that while this reform claims to stop nuisance “patent trolls” from clogging up the system with dubious claims, what it really does is eliminate competition from the market.  Because while the Silicon Valley PR hydra continues to sell the message that intellectual property is an outdated concept in the digital age — one that is chilling the general public’s civil liberties, no less — these companies don’t actually mean that IP is outdated for them, just for everyone else.

Not surprisingly, a web search is no quick way to find out how many patents these companies hold. Most notably, typing “How many patents does Google have?” into a Google search yields a rather opaque set of first results from the “organizer of all data.” I had to use Bing to get to this article from 2013 in MIT Technology Review, which indicates that since 2007, Google has accelerated its patent activity to the tune of over 1,500 awards per year.  This is still far behind IBM, but not bad for a company that keeps telling the public the USPTO is “overwhelmed” by applications and flimsy claims. This 2012 article from ZDNet estimates that Facebook owned 812 patents at the time of publication, nearly all of them purchased from IBM in a single week in an apparent move to build up its defensive position against litigation from Yahoo! and Mitel.  And this 2014 article in IP Watchdog offers some praise to Facebook for its “more developed” patent strategy in contrast to Twitter vis-à-vis market valuation.

I mention this final example to make the point that there is nothing inherently wrong with these companies availing themselves of IP protections; what’s wrong is the hypocrisy of backing policy change that would create an uneven playing field between big and small.  To get down to brass tacks: if one of these big boys infringed some IP you created, not only would you be at a financial disadvantage in a lawsuit, but they would also have rewritten the law to possibly label you a “troll” and invalidate your claim in the first place.

But so what? These are the real innovators, right?  They’re innovating a brighter future for everyone and doing it all for free in the name of freedom and open freeness and free openness and disruptive free open innovation and freedom. Right?  Yeah.  So, here’s one of Facebook’s latest patents, Patent No. 9,100,400, which this excerpt from a post by the law firm of Gottlieb Rackman & Reisman explains clearly:

“In the patent, Facebook explains that it has invented a system by which, among other things, it can take the data, specifically your list of friends or your “social network,” and examine the credit ratings of those in your social network. The data is then used to provide information about YOU to lenders, presumably under the theory that “birds of a feather flock together.” If your friends collectively have a good credit rating, the lender might give you a loan. If your friends collectively have a poor rating, the lender can close its file on your application. The point here is that lenders who might see some benefit in having data about your social network to judge the likelihood of your ability to pay the loan, or even your willingness to pay it back, will likely be paying Facebook (or any company that Facebook licenses) for the data.”
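To make the mechanics of that concrete, here is a deliberately crude sketch (in Python, with entirely made-up function names, scores, and thresholds) of the kind of guilt-by-association scoring the patent describes. This is my illustration, not Facebook’s actual implementation:

```python
# Purely illustrative toy model of the aggregation described in the patent:
# average the credit scores of an applicant's social network and gate the
# loan decision on that average. All names and thresholds are hypothetical.

def network_score(friend_scores: list[int]) -> float:
    """Average the credit scores of the applicant's friends."""
    return sum(friend_scores) / len(friend_scores)

def loan_decision(friend_scores: list[int], threshold: float = 650) -> str:
    """Approve or reject based solely on the network average."""
    return "approve" if network_score(friend_scores) >= threshold else "reject"

# Two applicants with identical personal finances but different networks
# receive different outcomes -- precisely the mechanism at issue.
print(loan_decision([700, 720, 690]))  # friends with good credit -> "approve"
print(loan_decision([580, 600, 610]))  # friends with poor credit -> "reject"
```

Note that nothing in this toy model ever looks at the applicant’s own finances, which is exactly the problem.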

At the top of the list of magical thinkers I distrust are religious zealots, followed closely by actuaries, the latter being too often engaged in devising some alchemical rationale to correlate, for instance, your choice of wardrobe with the insurance premium you should pay.  And we should not be surprised at all — in fact we have already seen other evidence — that social media profiles can become part of your unintended resume, your medical history, your credit-worthiness, your insurability; in short, your worthiness to live among the haves instead of the have-nots according to someone’s data-driven decision process.

Now it is possible that Facebook and lenders will not be able to implement this patented system as described without running afoul of the Equal Credit Opportunity Act; but if adopted, how is credit based on the company you keep not a potential digital-age means of helping the rich stay rich and the poor stay poor?  When an entity like an insurer, creditor, or potential employer wants to disenfranchise a type of person — black, gay, Mexican, women who have premarital sex! — it devises criteria to avoid direct conflict with anti-discrimination laws.  “We didn’t deny you that loan because you’re black; we denied it because your friends (who didn’t manage to escape the impoverished neighborhood you did) all have bad credit scores.”

So, it’s not hard to imagine a future with a variety of creative, actuarial schematics by which any individual or group may be disenfranchised simply because we have voluntarily made what we used to call “private life” a matter of public record. And because this is the new normal, perhaps we ought to be drawing new legal boundaries regarding personal information and discrimination, but that doesn’t seem to be the kind of reform any of the digital-age leaders want to talk about.

Posted in Digital Culture, Patents, Privacy

Thoughts on the Workless Future

My apologies in advance for the length and nearly stream-of-consciousness nature of the following:

While there appears to be consensus that we are rapidly innovating our way toward a future without work — or at least work as we have known it — we find myriad predictions and theories as to what this actually means, most especially whether this future might be dystopian or utopian.  And as contemporary college grads are already discovering, technological innovation has not necessarily led to new opportunities for stable and meaningful (i.e. related to their educations) work.  To the contrary, according to various articles, technology has generally fostered segmentation and massive outsourcing of traditional jobs into motley part-time “gigs” that people — often overqualified for them — cobble together to make ends meet.  Job seekers having this experience might begin to identify with the musician who has been told to let recorded music sales go, but to embrace the “opportunity” to tour more, sell merchandise, teach, etc. via digital platforms. Because it’s not just music that’s been devalued — it’s everything.

As I say, there are a lot of opinions, including those that tell us not to worry.  Technology has disrupted business sectors before and led to dire predictions, and new business sectors have always arisen to replace what’s been lost.  But unlike analogies to buggy whips and Luddites, the digital-age challenge, so far, is not one in which a traditional good or service is replaced by a different good or service (i.e. Schumpeter’s creative destruction).  Instead, the economic story of the digital revolution is one in which there is continued demand for many of the same goods and services we’ve wanted and needed for more than a century, but which can be produced and/or delivered with a lot less human effort.  That’s a potentially transformative phenomenon and the reason economists and other observers are taking the idea of the end of work so seriously.

In his article, The sharing economy will be our undoing, Robert Reich writes, “It’s estimated that in five years over 40 percent of the American labor force will have uncertain work; in a decade, most of us.”  Reich proposes solutions like a universal basic income, about which more in a moment; but suffice it to say the projection of a future America without work is that of a nation we would not recognize socially, politically, or economically.  As Derek Thompson reminds us in his excellent and in-depth piece for The Atlantic, called A World Without Work, what humans do with their time isn’t just a financial question; it’s an existential one.  But before we leap to the future, we should look at the present. Thompson describes a possible, incremental transformation toward a workless future, which certainly reflects the contemporary market described by other observers …

“What does the ‘end of work’ mean, exactly? It does not mean the imminence of total unemployment, nor is the United States remotely likely to face, say, 30 or 50 percent unemployment within the next decade. Rather, technology could exert a slow but continual downward pressure on the value and availability of work—that is, on wages and on the share of prime-age workers with full-time jobs. Eventually, by degrees, that could create a new normal, where the expectation that work will be a central feature of adult life dissipates for a significant portion of society.”

Sound familiar?  So, without getting too conspiracy-theory about it, I think it’s worth asking to what extent the owner/architects of our digital present consciously presume to be the authors of the future of our relationship to work itself. Because the major corporations in Silicon Valley, most prominently Google, are in constant conflict with legal frameworks — IP protection, anti-trust regulations, privacy protections, and even labor rights — around the world.  Naturally, I focus a lot on IP and the interests of creators, but the tech industry’s underlying argument against many of these legal systems is generally the same, somewhat vague assertion that they “stifle innovation.”  In the U.S., this argument has been the basis of much testimony before the House Judiciary Committee seeking comments on copyright review; it is the common thread among pro-tech pundits and industry-backed communications; and it is a meme I have seen on Facebook lately, stumping for the passage of a patent reform bill (HR 9) that claims to “support” entrepreneurial inventors despite the fact that this reform actually favors the dominance of big tech companies over smaller, independent innovators.

So, if we widen the lens a bit and think of IP rights as a kind of labor right vested in a specific expression or invention, we might recognize that the big-tech capitalists represented by companies like Amazon, Uber, and Google appear to be politically engaged in a process of unraveling many early 20th-century legal frameworks that were created to balance power between labor and capital and/or to bust monopolies.  Meanwhile, as Thompson makes clear — and as others have pointed out — these businesses themselves are not major job creators.

“In 1964, the nation’s most valuable company, AT&T, was worth $267 billion in today’s dollars and employed 758,611 people. Today’s telecommunications giant, Google, is worth $370 billion but has only about 55,000 employees—less than a tenth the size of AT&T’s workforce in its heyday.”

Oddly enough, the assertion that these labor and competition-based legal frameworks have become, in our times, “barriers to innovation” has seeped into the public consciousness, and at least some people have accepted the conclusion at face value without really considering what innovation ought to mean for us.  After all, innovation cannot be defined so broadly as any bit of software that provides entertainment, diversion, convenience, or communication (all of which is fine), because technology that is truly innovative should have a transformative market effect that spawns new economic opportunity for large segments of the workforce. But in general, the trend appears to be going the other direction, with technology destroying more opportunity than it is creating.  Hence, the fact that the innovation rationale for changing public policy has been so widely accepted is rather paradoxical, given that it is the digital natives who are the first workers to experience this “uncertain” market we seem to have innovated into existence.  Then, to twist the paradox a bit further, the techno-centric commentary currently preaches out of both sides of its philosophical mouth — proclaiming Schumpeter’s creative destruction one moment and a utopian, Keynesian future of leisure the next. As Thompson writes regarding the Schumpeter view …

“Technology creates some jobs too, but the creative half of creative destruction is easily overstated. Nine out of 10 workers today are in occupations that existed 100 years ago, and just 5 percent of the jobs generated between 1993 and 2013 came from ‘high tech’ sectors like computing, software, and telecommunications. Our newest industries tend to be the most labor-efficient: they just don’t require many people. It is for precisely this reason that the economic historian Robert Skidelsky, comparing the exponential growth in computing power with the less-than-exponential growth in job complexity, has said, ‘Sooner or later, we will run out of jobs.’”

So, as a sop to creative destruction, bullish pundits (e.g. Steven Johnson in his recently controversial piece for the NYT) will point to anecdotal evidence of new opportunity, like the YouTube star making considerable personal wealth, or the artisan entrepreneur making at least part of her living selling crafts via Etsy or on her own web platform. At the same time, we do see some migration of experienced professionals from entities like print media or ad agencies to new employment with social media platforms like Instagram.  And all of these are valid stories, but they do not necessarily scale to support the broader workforce. The new full-time tasks simply require fewer people; and it is fanciful to believe that the rest of the workforce could one day comprise millions of kitchen-table entrepreneurs and YouTube stars.  And so, the dystopian picture forms as more and more individuals do their best to piece together a combination of entrepreneurism and freelance “gigs” until the market proves this unsustainable, and people literally cannot afford to live. At least not in the economic system we have today.

When we turn to the Keynesian (i.e. the futurist) aspects of the conversation, the predictions become more interesting and more philosophical, but also more whimsical.  It was Keynes who predicted in 1930 that his grandkids would experience an American work week of 15 hours, leading not to deprivation, but to more leisure time made possible by a market of abundance.  And it is in this context that we typically examine the literal replacement of man with machine. One obvious example, of course, would be the universal adoption of driverless vehicles, which would obliterate the job most American men currently hold.  Thompson explores not only the economic impact of millions of laborers without work, but also some of the social and psychological assumptions made by sanguine academics (called post-workists) and futurists, who foresee several possible benefits of the new leisure society.

“…with the right government provisions, they believe, the end of wage labor will allow for a golden age of well-being. Hunnicutt [a post-workist] said he thinks colleges could reemerge as cultural centers rather than job-prep institutions. The word school, he pointed out, comes from skholē, the Greek word for ‘leisure.’ ‘We used to teach people to be free,’ he said. ‘Now we teach them to work.’”

I have to say that, speaking as someone who pursued a liberal arts education for its own sake — not as a job-training step — I am sympathetic to this idea. But this isn’t how most people approach education or work; and I don’t believe that social conditioning is the only reason. It seems more reasonable to assume, regardless of social conditions, that we’re all just wired a little differently. But this is a parenthetical observation.  Back to the larger point …

Assuming we address subsistence with mechanisms like a universal basic income — and this is a very big assumption — the utopian view predicts outcomes like more quality time with loved ones and the opportunity to pursue personal interests, crafts, or arts.  It is certainly a nice picture — and nobody can deny that work in many areas of contemporary society can be dehumanizing, meaningless, and disenfranchising — but this utopian projection raises many questions about human nature, and it is somewhat typical of the liberal academic, who I think likes to assume that man, left to his own devices, will naturally become Henry David Thoreau.  But as Thompson points out, men in particular who are rendered idle by underemployment tend to get depressed and watch a lot of television; and the reason for this is not exclusively an escape from financial worry, but the rather more obvious fact that it is not human nature to be idle. Of course, neither does this mean that every human left under-employed by some external agency will naturally produce work of personal or social significance by way of self-motivation.

Some people thrive by waking up on Monday morning with a blank slate, but others find this lack of structure and direction stifling.  I’ve been a freelance creative worker nearly my entire adult life, but I have plenty of friends and family who don’t necessarily love their jobs yet shudder to consider the uncertainty of my professional life.  And they’re right.  Any freedom that comes with this way of making a living is almost always constrained by the anxiety of not knowing what’s next and the resulting pressure to keep working, well past the point of sanity sometimes.

In this post from February 2013, I referred to author Bill Bryson’s book At Home, in which he describes how the English clergy system of the late 18th to mid 19th centuries produced a bounty of creative and inventive works. Because the English by that time were not particularly zealous about their religious practice, the clergy became a class of highly educated, financially sustained men with a great deal of time on their hands. As a result, many of these individuals — or sometimes members of their families — produced seminal works in science, economics, arts, and other disciplines.  And I mention this because it seems as though the utopian view of a workless future America foresees something akin to this prolific English idyll. But these predictions probably overlook the fact that this unique stratum of clergy rested upon a thriving agricultural economy; and of course, it was only some members of this semi-idle class who devoted their time and gifts to such valuable pursuits.  Presumably, the majority of these vicars and rectors did whatever the 19th-century equivalent was of loafing in front of the television.

On a related note, who will be producing leisure-time entertainment like television in this workless future?  Maybe everyone is making TV in a sense and distributing it on YouTube; but then, why does YouTube still exist in this semi-workless future market?  Because at the moment, this video platform, which isn’t yet even profitable in today’s market, is entirely supported by advertising.  So, in a market as radically transformed as a leisure society would have to be, why would advertising look anything like it does right now? Would it even exist at all?

In fact, if we think about advertising, the whole conversation about certain types of work being destroyed by technology kind of circles around to bite its own tail, because the design of the Internet we take for granted right now is pegged to the value of data about a population of employed, freely operating consumers making choices in a diverse, competitive market.  But it seems to me that in either a utopian or dystopian future where human labor has become less necessary, we would eventually see a market with fewer competitive producers vying for consumer attention.

In the utopian scenario, in which basic needs are successfully met through public mechanisms, consumer choice is unlikely to be the most efficient model. For instance, healthcare would have to be fully socialized, which means the private interest in data mining us as consumers of health-related products and services would evaporate as a revenue stream for Web platforms built on advertising and data collection — which is to say, the Web as we know it.  So, even if a utopian future predicts leaving your hum-drum job, working part time, and perhaps sharing the poetry you always meant to write via social media platforms, what exactly is keeping that social media platform up and running?

In the dystopian scenario, in which huge segments of the population eke out an uncertain living by means of piecemeal work, consumers would not be able to afford so much diversity as we have right now. (Or markets might include local barter exchanges among individuals.) But again, the value of advertising and data collection would no longer be the economic basis for the existence of Web 2.0.  So, in either a utopian or dystopian future, what would Web 3.0 look like? What is the mechanism that keeps these expensive-to-run platforms in existence?  The government? Probably not. Or does a semi-workless future lead to a kind of digital-age feudalism that has nothing to do with the Web as we know it today?

Go back to the driverless vehicle prediction, which can only result — at best — in a small number of corporations investing in, and thus owning, ground transportation throughout the continent. And whoever owns transportation controls the distribution of food and just about every other product on which we all depend.  This begins to look a lot like the late 19th century, before the Hepburn Act, when the railroad owners exerted monopolistic control over shipping throughout the U.S.  Except, of course, in a semi-workless future, even with a per-capita subsidy like a universal basic income, it may not be possible to pay back, for instance, the massive, stranded investment Google made to build an automated transportation network of cars and trucks in the first place.  Maybe nothing can pay that back in actual money.  So, what do we call a society in which a handful of owners control certain basic systems and needs of the population, but where the population can no longer even pay for those services through a traditional market-based relationship?  What it looks like to me is a society of landlords and serfs.

On the specific subject of a universal basic income, I personally agree with Thompson that this baseline solution to subsistence may be politically impossible, and also socially undesirable.  He writes the following:

“When I think about the role that work plays in people’s self-esteem—particularly in America—the prospect of a no-work future seems hopeless. There is no universal basic income that can prevent the civic ruin of a country built on a handful of workers permanently subsidizing the idleness of tens of millions of people.”

This, to me, is the bottom line and just one reason why people may decide in various ways to reject a future without work. I believe what makes humans unique creatures — that which gives life meaning, purpose, and so lays a foundation for economic systems — is that we are problem solvers. And we’ve been that way since the first hominid shaped the first rock into a cutting tool.  In the broadest sense — whether the pursuit is curing disease, understanding the universe, designing a house, combating ecological disaster, feeding millions of people, plumbing and electrifying a neighborhood, or even making a movie about any of those things — all of this activity is an exercise in human problem-solving.  And this is why it matters when humans climb Everest or break world records at the Olympics or play guitar like Hendrix or violin like Perlman.

Ultimately, I suspect human effort and our relationship to it cannot in any real sense be turned over to machines without us becoming restless or suicidally bored.  Think about NASCAR.  Minimally, it’s just a bunch of machines moving in an ellipse, but if there were no humans pitting their technical and physical skills against one another, nobody would ever watch it again.  And sure, a robot band can play a Ramones song, but once the initial gee-whiz factor wears off, it’s about as exciting as watching a refrigerator keep milk cold. After listening to the robots play “Blitzkrieg Bop” a couple of times, isn’t the next human instinct to disassemble the machines and tinker with them?  Certainly, this was the fate of any number of toys I had as a kid once I became bored with their initial purpose.

Of course, one problem with trying to predict the future is that it’s relatively easy to theorize about one aspect of life in isolation, but nearly impossible to apply the chaos theory inherent to holistic change.  Even Thompson concludes his article on a singular, positive note by referring to a man who, at 60, is pursuing his dream to be an educator — because the career path he might have followed was closed to him — but Thompson fails to mention that higher education is already in a state of financial (and even academic) crisis and is one of the many fields predicted to see substantial job loss due to technological disruption.  This is not unlike the narrow — and rather temporary — suggestion that creators can migrate to YouTube and share ad revenue, while ignoring the fact that web advertising is currently losing value and would naturally lose even more in a world with fewer working consumers.

My own assumption is that before any of this economic futurism turns into anything like substantive policy debate, unforeseen events may make decisions for us.  Climate change may precipitate some cataclysm, or there could be another 9/11-scale terrorist attack or some other prelude to war — be it strategically wise or stupid — that will significantly alter the course many presume we are on. Or we may simply reject any number of predicted automations before they become paradigmatic, not because we are Luddites, but because we still have consumer power and personal tastes. And so innovations like the driverless car or the non-human surgeon may prove as popular as Google Glass.  After all, technological barriers are probably not the main reasons we still don’t all have jet packs.

Posted in Digital Culture, Economics

Steven Johnson & A Thesis That Isn’t

A feature story for this week’s New York Times Magazine is titled The Creative Apocalypse That Wasn’t. In the article, writer Steven Johnson concludes that neither the economic nor the cultural losses in the creative industries, which were predicted to result from the digital revolution, have come to pass.  Just as lesser pundits have previously declared in blogs and industry PR pieces, Johnson tells readers the picture is actually rosier than ever for both creators and consumers since the disruption known as Napster. And of course some of those lesser pundits (e.g. Bob Lefsetz) have been quick to leap onto Johnson’s coattails and say, “See?  Told ya so!”

Setting aside the exaggerated title of the article — I don’t remember anyone seriously predicting a “creative apocalypse” — Johnson has certainly made an effort to approach the question more scientifically and with more nuance than most — or at least it will appear this way to readers, based on his tone and some of the culturally sensitive questions he rightly poses.  But the tricky thing about this article, as I see it, is that its main thesis is a bit of a moving target, supported by a variety of statements that in themselves demand far more complex analysis than Johnson either realizes or is willing to admit.

If we strip away some of the color and simply look at the assertions being made, the basic structure of the article reveals an important fallacy.  First, Johnson states that most of the evidence of harm done to creators in the digital age is anecdotal, and this is partly true — although anecdotes from professionals should not be misconstrued as mere random complaining.  So, to get beyond the anecdotal, Johnson then cites macroeconomic data compiled by the Labor Department, most of which suggests a big-picture view that creative people in all media are doing better than they were a decade ago.  But, having previously scorned the anecdotal negative, Johnson then cherry-picks bits and pieces of the anecdotal positive — some of which he misrepresents — in order to support his interpretation of the economic data he cites.

Writers like Paul Resnikoff of Digital Music News have previously commented on some of the flaws in the kind of data Johnson cites, demonstrating how big numbers can go up without necessarily benefiting the majority of workers in a particular sector.  It can be a bit like saying, The good news is there are more jobs this quarter, when the hidden bad news is that an individual needs two of those jobs just to meet the cost of living.  Or to put it another way, if ten musical superstars were to generate eighty billion dollars in revenue, this does not mean most middle-class musicians will share the prosperity of an eighty-billion-dollar industry.  So, one must be careful with macroeconomic data, and Johnson acknowledges this when he writes:

Could the surge in musicians be accompanied by a parallel expansion in the number of broke musicians? The income data suggests that this just isn’t true. According to the O.E.S., songwriters and music directors saw their average income rise by nearly 60 percent since 1999. The census version of the story, which includes self-­employed musicians, is less stellar: In 2012, musical groups and artists reported only 25 percent more in revenue than they did in 2002, which is basically treading water when you factor in inflation. And yet collectively, the figures seem to suggest that music, the creative field that has been most threatened by technological change, has become more profitable in the post-­Napster era — not for the music industry, of course, but for musicians themselves. Somehow the turbulence of the last 15 years seems to have created an economy in which more people than ever are writing and performing songs for a living.

But even if the numbers do lead to the conclusion Johnson wants us to draw — and it’s not clear that they do — I would point out that this is an example of the shifting thesis I referred to above.  Because these numbers, gathered from such a broad perspective, do not actually tell us anything about the effect the “digital economy” (good or bad) has had on the creative market, let alone offer any indicators as to what we might expect for the near future — and this is almost more important than where things stand right now.

At best, one might conclude that coincident with the development of Web 2.0, general revenues in the creative sectors have risen, but these data alone do not reveal any specific information that justifies dismissing all the anecdotal evidence from countless creators who are telling us that, in general, the threats outweigh the opportunities. And so, it is not surprising that, after presenting these revenue data, Johnson himself resorts to a litany of familiar, yet incomplete, anecdotes about all the good news out there.  But before addressing some of these, I’d like to return to the matter of thesis and remind ourselves what the underlying logic is behind the question that’s really being asked.

Johnson begins his article by citing Napster, which is arguably the beginning of piracy as we know it; and piracy was, and continues to be, foundational to many of the most dire predictions about the future of the creative economy.  But having set the stage with Napster (and a little dig at Lars Ulrich), Johnson then transitions to a broad analysis of “success” in the creative marketplace, essentially leaving piracy out of the discussion.  Hence, there is a little sleight of hand going on here because the argument he is supporting, either intentionally or not, is one often used to assert this general claim:  That although models based on creator ownership of works are outdated, there are ample new and better models available, if creators would just recognize them.  For instance, setting aside all the crazy, ideological bullshit about piracy, the economic argument that has been made, and which Johnson is fundamentally supporting, boils down to the following:

  • Yes, piracy created an expectation of free and very low-cost access, and it decimated actual sales of media to consumers. This also led to legal platforms like YouTube getting away with monetizing infringed works for years. But …
  • The same technologies that enabled piracy and YouTube-type monetized infringement have also opened up unprecedented opportunities for creators to earn revenue from new streams. So …
  • Smart creators will seize these new opportunities and stop worrying about piracy and other infringements.  Because …
  • The concept of copyrights for creators is an anachronism being clung to solely by legacy industry, which is incapable of adapting to a future that is chock-full of the aforementioned opportunities for individual creators.

So, it seems to me that any economic analysis regarding the sustainability of creators should either prove or disprove this basic argument.  And I will assert that Johnson fails even to fully address this matter because his article consistently strays from analysis as to whether or not the piracy-to-freemium narrative has been harmful, neutral, or beneficial to the market; and whether or not digital technology and the design of Web 2.0 have legitimately spawned enough new opportunity to overcome the market shift away from selling media direct to consumers.

Because, anecdotal though it may be, plenty of seasoned and younger creators of every size and type can tell you that many of the “new opportunities” cited by outside observers are either not new; or they are second-tier substitutes for the revenue that has been lost; or they are simply other lines of business that have little or nothing to do with the core creative works.  Johnson is as guilty in this article as many tech-utopians, who like to point to what’s left for creators (e.g. touring for musicians), now that sales are gone.  Unfortunately, this is a bit like telling someone, “Dude, I know they stole your car, but at least you still have your skateboard.”

Moreover, many of the “new” revenue streams to which Johnson refers are precarious, unproven, or short-lived.  For instance, he cites the unprecedented opportunity for a musician to share ad revenue via YouTube, but he seems to have missed several recent memos that might indicate why this model may never be a sustainable driver.

The first of these memos would be Zoë Keating’s explanation of what it’s like for an indie musician to transition from happily earning revenue via YouTube’s Content ID system to the more recent offer-she-can’t-refuse known as the Music Key contract, which in fact voids the Content ID account of any musician who does not sign.  The second memo would be a case like that of Jack Douglass, whose satirical videos are supposed to earn him ad revenue on YouTube, but which earn him nothing when someone re-uploads them to Facebook.  (Not that it isn’t funny to watch Google lose revenue as an example of its own anti-copyright agenda, but I digress.)  And perhaps the most compelling memo Johnson didn’t get would be the recent reports indicating that Web advertising itself may be in serious crisis as a value proposition across the entire industry.  Each of these three topics is a complex conversation unto itself, which would qualify some of Johnson’s blue-sky conclusions. And these are just the first that come to mind in response to just one of his anecdotal examples of how well one class of creators is supposedly doing.

There are, of course, too many instances in this article in which Johnson merely rattles off highlights (e.g. the current golden age in TV production) as though we’re meant to conclude without looking at any details that the “digital economy” has either helped, or at least not harmed, a given sector.  And, unfortunately, as other outside commentators have done before, Johnson cites the “lower cost of production,” thanks to digital technology, as an opportunity to maximize revenues for creators.  But as I have pointed out in the past, while low-cost digital tools have lowered the barriers to entry for new creators, they have not necessarily lowered the cost of all production, depending on the medium and nature of the products.

Anyone who knows production — and this is most especially true for filmed entertainment — will tell you that the “digital technology lowers cost” talking point is a half-truth at best, and a rather cynical one because it overlooks the human effort, which hasn’t really changed all that much in most cases.  For instance, most movies and TV that people seem to love — not to mention justify stealing — are produced with thousands of hours of highly skilled labor, the cost of which has nothing to do with the digital tools being used to create or distribute works.  And, in many cases, digital technology has actually increased those working hours rather than decreased them.

And I have to say that some of Johnson’s anecdotal evidence of good news is so tangential to his thesis as to be inscrutable, like when he writes the following:

Think of that signature flourish of 2000s-era television artistry: the exquisitely curated (and usually obscure) song that signals the transition from final shot to the rolling credits. Having a track featured during the credits of “Girls” or “Breaking Bad” or “True Blood” can be worth hundreds of thousands of dollars to a songwriter. (Before that point, the idea of licensing a popular song for the credits of a television series was almost unheard-of.)

Surely, Johnson knows that licensing songs for filmed entertainment is not a new thing. And while it is true that the current creative trend of syncing thematically appropriate tracks to the end credits of shows is a nice opportunity for music creators, these deals have no direct relationship with the “digital economy” one way or another.  Hence, Johnson is creating a distraction by identifying a revenue stream — a licensing agreement between film producers and music creators — which neither supports nor rejects his overall point.  If anything, he skips over the fact that the indirect impact of the free-media mindset has generally driven the value of creative works down to the extent that creators are continuously asked either to work — or to allow the use of their works — for free or for very low rates.  This effect on the market would seem to shrink the opportunity for musicians to make a living by licensing works rather than expand it.  But, again, my larger point is that each of the examples Johnson hauls into his net of good tidings demands far more in-depth analysis on a case-by-case basis.

Overall, though, Johnson seems to want to frame the discussion correctly, as when he writes this:

The dystopian scenario, after all, isn’t about the death of the record business or Hollywood; it’s about the death of music or movies. As a society, what we most want to ensure is that the artists can prosper — not the record labels or studios or publishing conglomerates, but the writers, musicians, directors and actors themselves.

I agree.  Despite being accused by my haters of wanting “the studios to be all powerful,” I don’t personally care whether legacy corporations survive in their present form.  But the Kool-Aid high Johnson seems to be on is one in which he can only see the short-term empowerment of some creators via these new technologies, and not the long-term, predatory nature of a brand new group of extraordinarily powerful corporate masters. The Music Key contract cited above is a clear example of how YouTube could maneuver to become a monopsony for music streaming.  So, how is that possibility better than having a handful of legacy labels and publishers? Also, if Johnson really does care, as I do, about preserving cultural diversity, then I am truly confused by this strangely dismissive statement:

The growth of live music isn’t great news for the Brian Wilsons of the world, artists who would prefer to cloister themselves in the studio, endlessly tinkering with the recording process in pursuit of a masterpiece.

It’s the words prefer and endlessly that are both offensive and naive. Plenty of professional musicians have already commented on the “growth in live music” myth, but this statement about Brian Wilson stuck out for me because it can only be a blatant reference to the album Pet Sounds — a recorded work so universally acclaimed as innovative that George Martin credited Wilson with inspiring Sgt. Pepper’s Lonely Hearts Club Band.  Call me crazy, but I think we do lose something if the market cannot support both a Taylor Swift, who makes millions of screaming teenagers happy, and a reclusive, even dysfunctional, genius, who creates a recorded work of unprecedented and lasting value. It should also be noted that creative envelope-pushers like Wilson have consistently been the forces behind the invention of new technologies themselves. Think ILM and Star Wars.

Ultimately, Johnson is supporting an anti-copyright — and even pro-piracy — argument; but he seems to want to have his Cake and eat John McCrea’s lunch, too. And I say this because so much of the evidence for prosperity he offers — both economic and anecdotal — is largely dependent upon the framework of copyright. So, after leading off with a thesis that fundamentally begs a question about the seeds of piracy (i.e. Did it hurt us?), he winds up painting some pretty pictures but never quite answers the question, because so much of the good news he alludes to is antithetical to a market that ignores, tolerates, or even extols the permission-free use of creative works.  Because, as we see with examples like Music Key or with the Internet industry’s willingness to monetize infringement while lobbying hard against creators’ rights, many creators themselves continue to discover that the Web giveth shortly before the Web taketh away.

Hence the question Johnson should be asking is not exclusively what the picture looks like right now (even if that picture is accurate), but whether or not the Internet industry is helping to foster a sustainable environment, not only for creators but for non-creative enterprises as well.  And although he is a much better writer than the tech-industry pundits, I don’t think he’s told a particularly compelling story just yet.


Google v Hood: Not Even a B-Movie Drama

For someone who clearly doesn’t like Hollywood, Emily Hong, policy wonk for New America’s Open Technology Institute*, is determined to pitch an over-the-top narrative about AG Hood v Google that is so divorced from reality that I don’t think Luis Buñuel would know what to make of it.  Reposted on Slate, her title and basic plot — which portray Google as the underdog, fighting not only for itself but for the sanctity of the Internet, against the juggernaut of the MPAA in cahoots with state attorneys general — beg the audience to suspend not only disbelief, but also the verifiable evidence that Google’s reach into government is greater by orders of magnitude than that of several other whole industries.  So much so that, when faced with indictment in 2011 by the DOJ for its role in illegal pharmaceutical trafficking, the corporate executives of Google were able to buy themselves a non-prosecution settlement for the meager sum of a half billion dollars.  Thus, the focus of AG Hood’s recent investigation had been to confirm whether or not Google was in compliance with that settlement — i.e. that the company was not still knowingly profiting from illegal trade.

That the MPAA would have an interest in this investigation is no surprise, even though copyright infringement was among the least of AG Hood’s concerns.  Still, ever since the release of emails leaked during the Sony hack, Google and its translucent PR network have attempted to spin the intent of Hood’s investigation — which 39 other AGs have now joined — into a conspiracy story in which the MPAA was effectively calling the shots and using the AG as a puppet to pursue the studios’ interests.  Meanwhile, it should be noted that Google does not deny that the company plays a role in mass infringement (and even monetizes it), but rather argues that any measures it might take to mitigate the problem would unavoidably lead to a “less open” Internet.  And so, as in any good/bad movie plot of this nature, why does the conspiratorial MPAA want AG Hood to pursue Google?  To take over the world, of course.  Only in this case, it’s more like taking over the Internet in order to censor it.  Thus, in an homage to B-movie villains everywhere, Hong writes:

“Beyond its melodrama, Google v. Hood also embodies a deeper ideological clash that persists between those who believe that Internet content must now be technologically and legally controlled and those who argue that it remain as open as possible in the service of free expression. Organizations like the MPAA and its analogue in the music industry, the Recording Industry Association of America, advocate for strict control, while technology companies (many of whom are the online intermediaries who would likely bear the costs of any control regime) and civil liberties activists want to preserve an unhindered atmosphere.”

Believe what you want about the players, their actions, and their motives.  The details and misrepresentations are so out of proportion now, it’s futile to even go there.  For the sake of argument, then, let’s assume companies are companies, all morally or amorally equal, if you will.  Yes, the motion picture industry would like to curb piracy; and yes, Google would like to avoid taking responsibility (financial or otherwise) for its role in the problem.  That’s business.  We get that.  But the idea that the story of Hood v Google is an epic tale of good vs evil — about the forces of openness vs the forces of censorship — is preposterous.

When Google ponied up its half-billion-dollar settlement and, in theory, stopped advertising against illegal drug trafficking, did you feel a chill in your right of free speech?  Or when millions of Americans applauded Reddit for its recent ban of the racist subreddit CoonTown, did you sense so much as a cool breeze warning you not to speak your mind on Facebook or Twitter, or to search for some news item somewhere on the Web?  And while an “open Internet” sounds like a good thing, it should not be taken as gospel that whatever puts an onus on Google will “close” the Internet.  To the contrary, we have seen in recent months both litigation and policy decisions in the U.S. and abroad that demonstrate Google and others can be forced to mitigate harm or conform to antitrust regulations without affecting our rights in the slightest.  In fact, for the moment, I feel entirely free to say that I believe this narrative Google and writers like Emily Hong keep spinning is complete bullshit.  And I have to wonder if that freedom is honestly best served by just letting Google do whatever it wants with complete immunity.

*As stated in the article, this organization includes Google executive chairman Eric Schmidt on its board.


Thumb War:  Sexual Revolution in the Digital Age

So, is the sexual revolution over?  If so, who won?

To be honest, it is very difficult to get a fix on the state of both social and political dynamics regarding sex and relationships in the millennial generation, especially through the frenetic, hand-held lens of social media. The general consensus appears to be that millennials are all about hooking up without any interest in even trying to have relationships, but this may not be as true as it is widely reported. At the same time, stories emanating from college campuses would have us believe that the grandsons of the Boomers are generally more prone to abusive behaviors, including rape, than their fathers and grandfathers, which is an extraordinary indictment if true. On the other hand, we may be hearing more about various types of dysfunctional or misogynistic male behaviors because millennial women feel more empowered than their mothers and grandmothers to confront these issues openly, and this in itself will change the dynamics of relationships.  Then again, we see some new and classic schisms among feminist voices whereby one woman’s personal empowerment is another’s treasonous surrender to sexism and “rape culture” itself.  Meanwhile, a whole industry has grown up around date-rape prevention, with products like nail polish that can detect narcotics slipped into drinks. And that phenomenon brings us full circle back to the aforementioned apparent trend toward consensual, casual sex because it raises the question of why there would be an increase in terrible subterfuges like sneaking drugs into cocktails if all parties are more open than ever before to casual encounters.  Or does the new casualness, which has been both enhanced and defined by smart-phone hook-up apps, actually foster more bad behaviors among men because digital dating has so thoroughly turned sex into a cold, dissociative transaction? These and other questions abound that I could not hope to address in a single essay.

Suffice it to say, we read a lot of disparate and often conflicting reports from the battlefield; and according to a recent exposé by Nancy Jo Sales, writing for Vanity Fair, now that the sexual revolution has gone digital, millennials are ushering in what she calls the “dating apocalypse.”  Her article, which is largely based on conversations with 20-something men and women who are actively hooking up through platforms like Tinder, conveys a broad narrative that sounds about as bleak and dysfunctional as every complaint I’ve ever heard about the pre-digital dating scene. The major difference is that today it all simply sounds bleaker and more depressing at twice the speed, thanks to technology.

From Sales’s description, the characterization we get of apps like Tinder, Hinge, and OKCupid is that, although these services are marketed with images of dreamy couples canoodling in romantic settings, they are not being used as dating services akin to, say, eHarmony, but rather as straight-up transaction brokers — personal pimp-yentas, if you will — solely for the purpose of arranging casual sex with complete strangers.  “Dates” might last maybe an hour from door to door and back again, depending on traffic conditions. And according to some reports, many of the people on Tinder are either married or in supposedly committed relationships.

Tinder itself balked at Sales’s portrayal in a series of tweets like the one that reads, “It’s disappointing that @VanityFair thought that the tiny number of people you found for your article represent our entire global userbase.”  But whether or not the young singles Sales interviewed in various locations are a fair representation of Tinder’s reported 50 million users worldwide, the company’s reactionary tweets have largely been mocked for their petulant defensiveness.  Because it doesn’t really matter how many of Tinder’s users are similar to the people Sales profiled, inasmuch as her research into the state of dating today is no less in-depth than preceding articles of its kind.  And indeed the characters and narratives that emerge are not only familiar, but also sound exactly like the kinds of stories we might expect to come from digitally-enhanced dating techniques.

Not surprisingly, Sales’s article suggests that plenty of young men are as hardwired for relationship-free sex as ever, while young women are still trying to figure out how they feel about these encounters, still grappling with basic inequalities between the sexes with regard to expectations and behavioral norms.  Even some of the women profiled, who say they’re perfectly comfortable with casual sex via these apps, seem to have expectations of basic behaviors from the men — like don’t go right back onto Tinder immediately after having sex with someone — about which the men sound rather typically oblivious.  Several of the women featured indicate that even basic courtesies are the exception rather than the rule, as though the theme has gone from “Will he call?” to “Will he even say goodbye on his way out the door?” So, not so much a new story about dating as the same old story that’s gotten a little bit worse.  And how could it not if hooking up via apps like these only exacerbates certain fundamental flaws in many of us men to begin with?

Speaking as a former young man myself, I feel comfortable generalizing that we’re not inherently a bright bunch when it comes to this stuff, and we are somewhat programmed to lack empathy at exactly the same time that we become sexual creatures.  As a result, it often takes young men some measure of getting hurt, and even causing hurt, before we actually start to figure out how to behave at all, let alone how to one day be in a real relationship.  And I suspect the basic truth still holds that intimacy, casual or not, is just plain different for women. And so, in the digital dating scene, more people may be acting cool on the surface than ever before; but people have always acted cool on the surface in pre-digital encounters, and it has almost never been true that sex is so emotionally unencumbered.

Perhaps the most telling details that emerge from Sales’s investigation are those which imply that a lot of the casual sex being had out there isn’t any better today than it was before the invention of the smart-phone wing-man. And it might even be worse.  In fact, according to comments from one group of young women, it sounds as though breaking down barriers to casual encounters through apps yields at least as many, if not more, complaints of erectile dysfunction and consistent failures to produce sexually satisfying experiences (i.e. orgasms) for the women.  Again, if these complaints are representative of the larger experience, this is not at all surprising.  As with empathy and other emotional connections to another human being, there is usually a learning curve when it comes to partners having good sex rather than just some sex. And so it stands to reason that good sex isn’t going to happen very often — and quite possibly never — among people who merely “hit it and quit it,” as the contemporary saying goes, according to Sales.

Yet another aspect of apps like Tinder is that, as interactive media unto themselves, they are essentially hand-held games, which presumably breed addictive, game-like habits with the added bonus of scoring “points” that feel emotionally empowering.  Swipe images of men or women you find attractive, and it must be kind of a thrill each time there’s a reciprocal match — like playing Concentration with little prizes along the way for your ego.  As such, I would not be surprised if the consummation of “winning” this game — at least for many women and possibly for more men than might admit it — is frequently less satisfying than playing the game itself.  If this is true, it really is a shame.  After all, the highs and lows of dating have always been something of a game, but one played through a broad range of human interactions like conversation, body language, humor, flirting, and so on — all stimulating a range of emotional experiences that give our lives color, depth, and meaning.  One can hardly consolidate all that down to a finger swipe across a glass screen and expect richer experiences to manifest as a result.

Admittedly, it is often difficult to compartmentalize the distinct but overlapping subjects of sex, sexism, sexualization, feminism, and even such extreme behaviors as sexual abuse.  One body of thought is that sexualization is born exclusively of sexism, which can be the foundation for assault, or at least chronic callousness. My view has generally been that sexualization is unavoidable, but that as long as all parties are on equal terms, being open about sexualizing one another (in the right contexts) is probably healthier than sublimating these instincts.  This is because, generally speaking, in cultures where sex is shameful, women wind up as victims of abuse and scorn, sometimes quite horrifically.

And of course America — nation of rock-n-roll puritans that we are — is unique in its brand of hypocrisies, which manifest in overt sexualization often filtered through the pretensions of religiously-based notions of decency.  Sexual double-standards are part of the American DNA, and these result both in hypocritical behaviors and hypocritical public policies about which we are still fighting nearly fifty years after the Summer of Love.  And so, on one level, one might expect the sexual frankness implicit in hook-up apps to be one means by which the next generation breaks down many of these longstanding hypocrisies, theoretically putting men and women on more equal footing than ever. In this sense, we would expect the millennials to be enjoying the freest “love” since the free-love movement itself.  But according to Nancy Jo Sales’s article, it doesn’t sound as though this is quite the case.  Instead, it sounds like these apps, with their astounding billion-dollar valuations, may be just another Silicon Valley swindle, conning a whole generation into trading meaningful experiences for a gluttony of meaningless ones.
