Thoughts on the Workless Future

My apologies in advance for the length and nearly stream-of-consciousness nature of the following:

While there appears to be consensus that we are rapidly innovating our way toward a future without work — or at least work as we have known it — we find myriad predictions and theories as to what this actually means, most especially whether this future might be dystopian or utopian. And as contemporary college grads are already discovering, technological innovation has not necessarily led to new opportunities for stable and meaningful work (i.e. work related to their education). To the contrary, according to various articles, technology has generally fostered the segmentation and massive outsourcing of traditional jobs into motley part-time “gigs” that people — often overqualified — cobble together to make ends meet. Job seekers having this experience might begin to identify with the musician who has been told to let recorded music sales go, but to embrace the “opportunity” to tour more, sell merchandise, teach, etc. via digital platforms. Because it’s not just music that’s been devalued — it’s everything.

As I say, there are a lot of opinions, including those that tell us not to worry. Technology has disrupted business sectors before and led to dire predictions, and new business sectors have always arisen to replace what’s been lost. But unlike the disruptions invoked by analogies to buggy whips and Luddites, the digital-age challenge, so far, is not one in which a traditional good or service is replaced by a different good or service (i.e. Schumpeter’s creative destruction). Instead, the economic story of the digital revolution is one in which there is continued demand for many of the same goods and services we’ve wanted and needed for more than a century, but which can now be produced and/or delivered with a lot less human effort. That’s a potentially transformative phenomenon and the reason economists and other observers are taking the idea of the end of work so seriously.

In his article “The sharing economy will be our undoing,” Robert Reich writes, “It’s estimated that in five years over 40 percent of the American labor force will have uncertain work; in a decade, most of us.” Reich proposes solutions like a universal basic income, about which more in a moment, but suffice it to say the projection of a future America without work is that of a nation we would not recognize — socially, politically, or economically. As Derek Thompson reminds us in his excellent and in-depth piece for The Atlantic, “A World Without Work,” what humans do with their time isn’t just a financial question, it’s an existential one. But before we leap to the future, we should look at the present. Thompson describes a possible, incremental transformation toward a workless future, which certainly reflects the contemporary market described by other observers …

“What does the ‘end of work’ mean, exactly? It does not mean the imminence of total unemployment, nor is the United States remotely likely to face, say, 30 or 50 percent unemployment within the next decade. Rather, technology could exert a slow but continual downward pressure on the value and availability of work—that is, on wages and on the share of prime-age workers with full-time jobs. Eventually, by degrees, that could create a new normal, where the expectation that work will be a central feature of adult life dissipates for a significant portion of society.”

Sound familiar? So, without getting too conspiracy-theory about it, I think it’s worth asking to what extent the owner/architects of our digital present are consciously presuming to be the authors of the future of our relationship to work itself. Because the major corporations in Silicon Valley, most prominently Google, are in constant conflict with legal frameworks — IP protection, antitrust regulations, privacy protections, and even labor rights — around the world. Naturally, I focus a lot on IP and the interests of creators, but the tech industry’s underlying argument against many of these legal systems is generally the same, somewhat vague assertion that they “stifle innovation.” In the U.S., this argument has been the basis of much testimony before the House Judiciary Committee as it seeks comments on copyright review; it is the common thread among pro-tech pundits and industry-backed communications; it is a meme I have seen on Facebook lately, stumping for the passage of a patent reform bill (HR9) that claims to “support” entrepreneurial inventors despite the fact that this reform actually favors the dominance of big tech companies over smaller, independent innovators.

So, if we widen the lens a bit and think of IP rights as a kind of labor right that is vested in a specific expression or invention, we might recognize that the big-tech capitalists represented by companies like Amazon, Uber, and Google appear to be politically engaged in a process of unravelling many early 20th-century legal frameworks that were created to balance power between labor and capital and/or to bust monopolies. Meanwhile, as Thompson makes clear — and as others have pointed out — these businesses themselves are not major job creators.

“In 1964, the nation’s most valuable company, AT&T, was worth $267 billion in today’s dollars and employed 758,611 people. Today’s telecommunications giant, Google, is worth $370 billion but has only about 55,000 employees—less than a tenth the size of AT&T’s workforce in its heyday.”

Oddly enough, the assertion that these labor- and competition-based legal frameworks have become, in our times, “barriers to innovation” has seeped into the public consciousness, and at least some people have accepted the conclusion at face value without really considering what innovation ought to mean for us. After all, innovation cannot so broadly be defined as any bit of software that provides entertainment, diversion, convenience, or communication (all of which is fine), because technology that is truly innovative should have a transformative market effect that spawns new economic opportunity for large segments of the workforce. But in general, the trend appears to be going in the other direction, with technology destroying more opportunity than it is creating. Hence, the fact that the innovation rationale for changing public policy has been so widely accepted is rather paradoxical, given that it is the digital natives who are the first workers to experience this “uncertain” market we seem to have innovated into existence. Then, to twist the paradox a bit further, the techno-centric commentary currently preaches out of both sides of its philosophical mouth — proclaiming Schumpeter’s creative destruction one moment and a utopian, Keynesian future of leisure the next. As Thompson writes regarding the Schumpeter view …

“Technology creates some jobs too, but the creative half of creative destruction is easily overstated. Nine out of 10 workers today are in occupations that existed 100 years ago, and just 5 percent of the jobs generated between 1993 and 2013 came from ‘high tech’ sectors like computing, software, and telecommunications. Our newest industries tend to be the most labor-efficient: they just don’t require many people. It is for precisely this reason that the economic historian Robert Skidelsky, comparing the exponential growth in computing power with the less-than-exponential growth in job complexity, has said, ‘Sooner or later, we will run out of jobs.’”

So, as a sop to creative destruction, bullish pundits (e.g. Steven Johnson in his recent, controversial piece for the NYT) will point to anecdotal evidence of new opportunity, like the YouTube star making considerable personal wealth, or the artisan entrepreneur making at least part of her living selling crafts via Etsy or on her own web platform. At the same time, we do see some migration of experienced professionals from entities like print media or ad agencies to new employment with social media platforms like Instagram. And all of these are valid stories, but they do not necessarily scale to support the broader workforce. The new full-time tasks simply require fewer people, and it is fanciful to believe that the rest of the workforce could one day comprise millions of kitchen-table entrepreneurs and YouTube stars. And so, the dystopian picture forms as more and more individuals do their best to piece together a combination of entrepreneurism and freelance “gigs” until the market proves this unsustainable, and people literally cannot afford to live. At least not in the economic system we have today.

When we turn to the Keynesian (i.e. the futurist) aspects of the conversation, the predictions become more interesting and more philosophical, but also more whimsical. It was Keynes who predicted in 1930 that his grandkids would experience a work week of 15 hours, leading not to deprivation, but to more leisure time made possible by a market of abundance. And it is in this context that we typically examine the literal replacement of man with machine. One obvious example, of course, would be the universal adoption of driverless vehicles, which would obliterate the most common job currently held by American men. Thompson explores not only the economic impact of millions of laborers without work, but also some of the social and psychological assumptions made by sanguine academics (called post-workists) and futurists, who foresee several possible benefits of the new leisure society.

“…with the right government provisions, they believe, the end of wage labor will allow for a golden age of well-being. Hunnicutt [a post-workist] said he thinks colleges could reemerge as cultural centers rather than job-prep institutions. The word school, he pointed out, comes from skholē, the Greek word for ‘leisure.’ ‘We used to teach people to be free,’ he said. ‘Now we teach them to work.’”

I have to say that, speaking as someone who pursued a liberal arts education for its own sake — not as a job-training step — I am sympathetic to this idea. But this isn’t how most people approach education or work; and I don’t believe that social conditioning is the only reason. It seems more reasonable to assume, regardless of social conditions, that we’re all just wired a little differently. But this is a parenthetical observation.  Back to the larger point …

Assuming we address subsistence with mechanisms like a universal basic income — and this is a very big assumption — the utopian view predicts outcomes like more quality time with loved ones, and the opportunity to pursue personal interests, crafts, or arts. It is certainly a nice picture — and nobody can deny that work in many areas of contemporary society can be dehumanizing, meaningless, and disenfranchising — but this utopian projection is one that raises many questions about human nature, and it is somewhat typical of the liberal academic, who I think likes to assume that man, left to his own devices, will naturally become Henry David Thoreau. But as Thompson points out, men in particular who are rendered idle by underemployment tend to get depressed and watch a lot of television; and the reason for this is not exclusively financial worry, but the rather more obvious fact that it is not in human nature to be idle. Of course, neither does this mean that every human left under-employed by some external agency will naturally produce work of personal or social significance by way of self-motivation.

Some people thrive by waking up on Monday morning with a blank slate, but others find this lack of structure and direction stifling. I’ve been a freelance creative worker for nearly my entire adult life, but I have plenty of friends and family who don’t necessarily love their jobs yet shudder to consider the uncertainty of my professional life. And they’re right. Any freedom that comes with this way of making a living is almost always constrained by the anxiety of not knowing what’s next and the resulting pressure to keep working, sometimes well past the point of sanity.

In this post from February 2013, I referred to Bill Bryson’s book At Home, in which he describes how the English clergy system of the late 18th to mid 19th centuries produced a bounty of creative and inventive works. Because the English by that time were not particularly zealous about their religious practice, the clergy became a class of highly educated, financially sustained men with a great deal of time on their hands. As a result, many of these individuals — or sometimes members of their families — produced seminal works in science, economics, arts, and other disciplines. And I mention this because it seems as though the utopian view of a workless future-America foresees something akin to this prolific English idyll. But these predictions probably overlook the fact that this unique stratum of clergy rested upon a thriving agricultural economy; and of course, it was only some members of this semi-idle class who devoted their time and gifts to such valuable pursuits. Presumably, the majority of these vicars and rectors did whatever the 19th-century equivalent was of loafing in front of the television.

On a related note, who will be producing leisure-time entertainment like television in this workless future? Maybe everyone is making TV in a sense and distributing it on YouTube; but then, why would YouTube still exist in this semi-workless future market? At the moment, this video platform, which isn’t even yet profitable, is entirely supported by advertising. So, in a market as radically transformed as a leisure society would have to be, why would advertising look anything like it does right now? Would it even exist at all?

In fact, if we think about advertising, the whole conversation about certain types of work being destroyed by technology kind of circles around to bite its own tail, because the design of the Internet we take for granted right now is pegged to the value of data about a population of employed, freely operating consumers making choices in a diverse, competitive market. But it seems to me that in either a utopian or dystopian future, where human labor has become less necessary, we would eventually see a market with fewer competitive producers vying for consumer attention.

In the utopian scenario, in which basic needs are successfully met through public mechanisms, consumer choice is unlikely to be the most efficient model. For instance, healthcare would have to be fully socialized, which means the private interest in data mining us as consumers of health-related products and services would evaporate as a revenue stream for Web platforms built on advertising and data collection, which is to say, the Web as we know it. So, even if a utopian future predicts leaving your humdrum job, working part time, and perhaps sharing the poetry you always meant to write via social media platforms, what exactly is keeping that social media platform up and running?

In the dystopian scenario, in which huge segments of the population eke out an uncertain living by means of piecemeal work, consumers would not be able to afford as much diversity as we have right now. (Or markets might include local barter exchanges among individuals.) But again, the value of advertising and data collection would no longer be the economic basis for the existence of Web 2.0. So, in either a utopian or dystopian future, what would Web 3.0 look like? What is the mechanism that keeps these expensive-to-run platforms in existence? The government? Probably not. Or does a semi-workless future lead to a kind of digital-age feudalism that has nothing to do with the Web as we know it today?

Go back to the driverless vehicle prediction, which can only result — at best — in a small number of corporations investing in, and thus owning, ground transportation throughout the continent. And whoever owns transportation controls the distribution of food and just about every other product on which we all depend. This begins to look a lot like the late 19th century, before the Hepburn Act of 1906, when the railroad owners exerted monopolistic control over shipping throughout the U.S. Except, of course, in a semi-workless future, even with a per-capita subsidy like a universal basic income, it may not be possible to pay back, for instance, Google for the massive, stranded investment it made to build an automated transportation network of cars and trucks in the first place. Maybe nothing can pay that back in actual money. So, what do we call a society in which a handful of owners control certain basic systems and needs of the population, but where the population can no longer even pay for those services through a traditional market-based relationship? What it looks like to me is a society of landlords and serfs.

On the specific subject of a universal basic income, I personally agree with Thompson that this baseline solution to subsistence may be politically impossible, and also socially undesirable.  He writes the following:

“When I think about the role that work plays in people’s self-esteem—particularly in America—the prospect of a no-work future seems hopeless. There is no universal basic income that can prevent the civic ruin of a country built on a handful of workers permanently subsidizing the idleness of tens of millions of people.”

This, to me, is the bottom line and just one reason why people may decide in various ways to reject a future without work. I believe what makes humans unique creatures — that which gives life meaning, purpose, and so lays a foundation for economic systems — is that we are problem solvers. And we’ve been that way since the first hominid shaped the first rock into a cutting tool.  In the broadest sense — whether the pursuit is curing disease, understanding the universe, designing a house, combating ecological disaster, feeding millions of people, plumbing and electrifying a neighborhood, or even making a movie about any of those things — all of this activity is an exercise in human problem-solving.  And this is why it matters when humans climb Everest or break world records at the Olympics or play guitar like Hendrix or violin like Perlman.

Ultimately, I suspect human effort and our relationship to it cannot in any real sense be turned over to machines without us becoming restless or suicidally bored. Think about NASCAR. Minimally, it’s just a bunch of machines moving in an ellipse, but if there were no humans pitting their technical and physical skills against one another, nobody would ever watch it again. And sure, a robot band can play a Ramones song, but once the initial gee-whiz factor wears off, it’s about as exciting as watching a refrigerator keep milk cold. After listening to the robots play “Blitzkrieg Bop” a couple of times, isn’t the next human instinct to disassemble the machines and tinker with them? Certainly, this was the fate of any number of toys I had as a kid once I became bored with their initial purpose.

Of course, one problem with trying to predict the future is that it’s relatively easy to theorize about one aspect of life in isolation, but nearly impossible to account for the chaos inherent in holistic change. Even Thompson concludes his article on a singular, positive note by referring to a man who, at 60, is pursuing his dream to be an educator — because the career path he might have followed was closed to him — but Thompson fails to mention that higher education is already in a state of financial (and even academic) crisis and is one of the many fields predicted to see substantial job loss due to technological disruption. This is not unlike the narrow — and rather temporary — suggestion that creators can migrate to YouTube and share ad revenue, while ignoring the fact that web advertising is currently losing value and would naturally lose even more in a world with fewer working consumers.

My own assumption is that before any of this economic futurism turns into anything like substantive policy debate, unforeseen events may make decisions for us. Climate change may precipitate some cataclysm, or there could be another 9/11-scale terrorist attack or some other prelude to war — be it strategically wise or stupid — that would significantly alter the course many presume we are on. Or we may simply reject any number of predicted automations before they become paradigmatic, not because we are Luddites, but because we still have consumer power and personal tastes. And so innovations like the driverless car or the non-human surgeon may prove as popular as Google Glass. After all, technological barriers are probably not the main reasons we still don’t all have jet packs.

David Newhoff
David is an author, communications professional, and copyright advocate. After more than 20 years providing creative services and consulting in corporate communications, he shifted his attention to law and policy, beginning with advocacy of copyright and the value of creative professionals to America’s economy, core principles, and culture.
