Why the Wrong Picture Matters

On Friday last week, a Q&A appeared on The New York Times website between journalist James Estrin and photographer Ami Vitale.  The story pertains to the now widely recognized hashtag campaign #BringBackOurGirls, meant to raise awareness and perhaps pressure officials in our own countries to do everything possible to rescue nearly 300 schoolgirls kidnapped by Islamic terrorists in Nigeria.  At issue are three photographs of young African women that have to a great extent become the faces of the campaign, spread throughout the Internet and featured on mainstream news broadcasts.  The problem is that these three photos, used without permission, were taken by Ms. Vitale as part of her documentary study of society in Guinea-Bissau, a country more than a thousand miles from Nigeria whose residents have nothing to do with the victims of these kidnappings.

Vitale is angry for several good reasons, I think, not the least of which (if I may paraphrase) is that the appropriation of these images, even in the name of a cause as dire as the Nigerian situation, implies a tremendous cynicism about the civil liberties pertaining to the likenesses of the subjects; and in this case, is further aggravated by cultural insensitivity.  In other words, nobody’s image should be used without permission as though it were a generic stock photo — I know I’d be angry if my daughter’s picture were featured in an anti-sex-trafficking campaign without our permission — and this particular misrepresentation implies that faces can just be interchanged because, well, they’re African, and nobody around here really knows the difference.  Ironically, that homogenous view of Africa is an impression Vitale is seeking to contradict with this particular series of photos from Guinea-Bissau.  To quote:

“I wanted to put a human face on conflict. But when I got there my story changed. Because I realized the way Africa is generally portrayed in mainstream media is either wars, famine or stories like this terrible abduction. You see the horrors or the other extreme, beautiful safaris and exotic animals. There’s nothing in between.”

Photographs can, of course, be very powerful; but the power of a single image, I believe, is tied to the manner in which it becomes encoded into long-term memory rather than passing through short-term memory.  And the tendency now to gist our way through constant absorption of images on social media could well be turning us into short-term-memory beings who outsource long-term memory to the cloud.  Certainly, this would be consistent with some predictions coming from technologists who promote this modification as an enhancement to the human condition.  But if this is in fact the new reality, it seems to me that when images like Vitale’s photographs are stripped of their legitimate context and applied to another context of tremendous gravity, what’s being lost is anything but trivial.

No matter how this horrific story in Nigeria unfolds, doesn’t it matter if Vitale’s photos of the girls in Guinea-Bissau could theoretically become icons associated with a completely unrelated story?  Wouldn’t this betray the principles of journalism and all non-fiction storytelling?  Or does a hashtag campaign like #BringBackOurGirls exist as some collective activism similar to but separate from journalism in which the goal of awareness-raising is more important than the integrity of the story tied to a single image?  Personally, I don’t think so.

Cynical as it sounds, I think we have to admit that hashtag campaigns about highly complex and deadly serious issues have a somewhat contradictory nature.  On the one hand, there is a measure of practical and social value to the kind of global vigil being held at this moment; but on the other hand, sadly, the Nigerian kidnapping story is just what’s “trending” this month alongside celeb gossip and other bits of fluff.  In this recent article in The Daily Beast, terrorism expert Christopher Dickey suggests that when our momentary attention to this story wanes and the girls are very likely still captives, what may well effect their release is the unsavory option of a large ransom and a slow negotiation with the devil.  Whether that’s the case, or intelligence services and special ops can locate and rescue these girls, the hashtag campaign is, to an extent, just something the rest of us do because we can do nothing.

Dickey’s assessment is based on several decades’ worth of knowing who the players are in global terrorist organizations and what motivates individual actors.  And this relates to the work of Ami Vitale in an important way.  When we all move on to the next story from the safe distance afforded by our devices, it’s the photographers and journalists and documentarians who stick around in places like Nigeria, Kenya, Somalia, and Guinea-Bissau so they can tell the rest of the story.  And it is essential that those stories be kept intact and not casually remixed, even with the best of intentions.

The Story “Transcendence” Didn’t Tell

Photo by agsandrew. istockphoto.com

WARNING:  There be spoilers here!

Despite the bad reviews, I had to go see the film Transcendence last weekend.  Given that its plot is based on certain theories pertaining to the technological singularity, how could I not go see it?  Indeed, it was very much not a good movie, and although film criticism is outside the editorial scope of this blog, the story opportunity I think was missed is relevant for discussion here.

Johnny Depp plays Dr. Will Caster, a top computer scientist working in the field of AI (artificial intelligence) along with his wife, Rebecca, also a top computer scientist.  The adoring couple believe absolutely that human “transcendence” through symbiosis with computers into a newly evolved condition is a virtuous pursuit that can only benefit mankind.  Unfortunately for them, a group of hacker/terrorists led by one of Caster’s former students believes that advancing AI toward the technological singularity — the moment computing intelligence surpasses human intelligence and becomes self-aware — is a dangerous abomination.  At the start of the film, this underground group assassinates several leading AI researchers, and one of their operatives shoots Dr. Caster, which at first appears to have been a non-lethal grazing but is soon revealed to cause radiation poisoning from a polonium-tipped bullet.  With her husband having only weeks to live, Rebecca, with the help of their colleague and dear friend Max, uploads Caster’s mind into the core of their highly sophisticated computer and succeeds in giving his consciousness new life.  Once Rebecca connects Caster to the Internet, he becomes omnipresent and nearly omniscient. And then the movie really starts to blow.

What unfortunately transpires after Caster’s transcendence is a stock action thriller complete with paramilitary personnel towing around a piece of WWII-era artillery for no particularly good reason.  By the time supercomputer Caster begins to “heal” sick and wounded people with nano-tech that turns them super-human and immortal as long as they’re connected to the network, Rebecca finally catches on to the fact that she’s started something pretty dangerous.  Together with Max, the underground hackers, a smattering of federal agents, and the wise old scientist (played by Morgan Freeman, of course), they determine that the only way to stop the conscious computer is to send in a virus.  This is ultimately accomplished when Rebecca volunteers to be infected with the virus and lets Caster upload her into the system.  Stopping Caster has the unfortunate side effect of plunging the planet into darkness because, of course, the virus infects everything that is networked worldwide.

As my son and I left the theater, we joked about the fact that the film leaves us with the world “saved,” if we can call civilization reduced to a primitive state and about to erupt in medieval chaos “saved.”  But that joke is exactly where I think the more interesting plot point was lost in the movie that got made.  The existential question asks which is the better choice:  to shut down all systems and let humanity try to rebuild civilization from the destruction that would surely follow, or to allow all living things to artificially evolve into a new state as networked entities with what might be described as a kind of holographic consciousness and probably no free will? Would it even be humanity?

This is already a question for our times, if one is to take seriously the very real utopianism of AI scientists like Ray Kurzweil, presently the director of engineering at Google.  Plenty has been written about Kurzweil himself, his obsession with immortality underscoring a relentless pursuit in a lab that enables him to work at “Google scale,” as the offer was apparently put to him when the company courted his employment. AI research is no science fiction, and neither is the probability of singularity, but as theoretical physicist Stephen Hawking warns in an article published yesterday, nobody is really taking the implications of this inexorable march toward possible self-destruction very seriously.  Never at a loss for wit even when dealing with weighty subjects, Hawking writes, “If a superior alien civilisation sent us a message saying, ‘We’ll arrive in a few decades,’ would we just reply, ‘OK, call us when you get here – we’ll leave the lights on’? Probably not – but this is more or less what is happening with AI.”

Hawking warns unequivocally that, while AI could bring about some miraculous achievements in the short term, computers able to reprogram themselves, outwit financial markets, and even build weapons could very easily transcend human control and become the recipe for our sudden extinction.  Personally, I think there are enough hazards to be considered right now, including experiments with autonomous weapons that can decide who their targets are, and consolidated, corporate control of the research, data, and the agenda itself.  It seems to me people are just beginning to grapple with the implications of how much invasive data mining we’re allowing a company like Google to do, so how long will it take before anyone talks about the possible doomsday algorithms being tinkered with in its labs?  Cynically, I believe I know the answer to that question, and it will have something to do with whatever The Biebster is up to next week.

Anyone who reads this blog knows I write in defense of copyrights but not necessarily why.  It’s easy to get into debates and squabbles over the particulars of that body of law and to get caught up in what I believe to be a false debate over progress vs anti-progress.  I defend copyrights for the same reason I’m uncomfortable with drone warfare and don’t want to see autonomous weapons, even if they might make my own kid’s future job in the Navy less hazardous.  Copyrights, I believe, are merely one way in which we affirm that humans maintain dominion over their technology.  When we reduce our intimate thoughts, ideas, and creative expressions to the banality of data, we take a step closer toward abdicating that authority.

We should probably pay attention to anyone of Stephen Hawking’s stature, but I find his voice on this particular subject uniquely poignant.  After all, Hawking is probably about as close as any human has ever come to a life manifest as Descartes’s cogito ergo sum (“I think, therefore I am”), existing almost entirely as a mind without a body, and most importantly, a mind blessed with the capacity to travel well beyond the boundaries that contain most of us mortal thinkers.  We are lucky to have had Hawking live as long as he has with a disease that was supposed to take his life many years ago.  I’ll stop short of calling him a prophet, but maybe somebody should at least report what he’s saying on the news or something.  Perhaps they could split the airtime for round-table discussion between the fate of Donald Sterling and the fate of all humanity.  In the meantime, Transcendence was indeed a box-office flop for Alcon Pictures, and from my point of view, it’s because the filmmakers let the interesting story go for the sake of a lot of boilerplate action sequences.  Maybe that in itself is a lesson.

The Amazon Effect

More than a decade ago, a book editor managing her own imprint at one of the big publishing houses gave me some insight into her world that I’ll never forget.  “I have to publish about five diet books,” she told me, “in order to invest in one new novelist.”  It’s important to understand that this is not a comment on the publishing industry but rather a comment on the book-buying market.  Like it or not, the number of people who want to purchase serious literature and non-fiction is considerably smaller than the number of people who want to buy self-help books, diet books, and pulp fiction.  And there’s nothing inherently wrong with suppliers delivering the products people want, but when it comes to products like books (as it is with music and filmed works), the healthiest market overall is one that sustains the greatest diversity of material, which is not necessarily the same thing as the greatest number of works.  This is a distinction I suspect the algorithmically-minded folks at Amazon may not understand, or care to understand; and this leads to the question of what effect the distribution leviathan will continue to have on publishing and literature going forward as well as what the company represents to the overall economy.

This past February, George Packer published a detailed examination of Amazon in The New Yorker under the subtitle “Amazon is good for customers. But is it good for books?”  Packer covers so much ground, some of it rather startling, that the article is hard to summarize, and I strongly suggest reading it if you haven’t.  Probably the most striking revelation in the piece is the manner in which Amazon pushed the concept of “co-op marketing” fees — money a publisher would spend at a brick-and-mortar store like Barnes & Noble for a prominent display of a new book — to something reminiscent of an old-fashioned shakedown with a digital spin.  According to accounts cited in Packer’s article, it was pay the fees to Amazon or watch the “Buy” buttons disappear from your products, meaning browsers literally could not purchase the books on the site.  You can almost imagine the heavy saying something like, “Youze got a nice collection of novels here. I wouldn’t want to see anything happen to ‘em.”

To my mind, the techno-utopianism exemplified by Amazon — and uniquely by Amazon because of the way the business is both web-based and operates in physical space — is based on two illusions, one that is probably hazardous economically, and another that is probably hazardous culturally.  The economic implications are relatively easy to recognize in that we’re seeing the Wal-Martization of every line of business represented by the things Amazon delivers — and Amazon delivers everything.  The illusion for the consumer is that we get low prices and convenience; but the hidden, long-term cost may well be the jobs that enable us to buy stuff in the first place.  This vicious, downward cycle is very neatly summed up in this 2005 JibJab spoof. It depicts a man enjoying low prices at “BigBox Mart,” losing his job at a supplier due to pricing pressures by “BigBox Mart,” then having no recourse other than to work for “BigBox Mart” well below his qualifications and at some fraction of his previous earnings.


An economy is an ecosystem, and just as the principle of biodiversity teaches us that a whole species cannot be eradicated without threatening other species, I suspect the same can be said for certain organisms within a free-market economy.  Sure, those who stand to gain will talk about creative destruction and technological progress, but when the products or labors being artificially devalued still have real value (i.e. market demand), that’s not creative destruction; it’s just destruction without creating anything new to replace what’s been lost. Like Wal-Mart, the Amazon model doesn’t create anything; it is merely a distribution system, a contemporary railroad that can dictate the prices charged in every diner along its route.  Except this railroad has thousands of lines spanning in all directions, doling out cheap candy to the passengers and simultaneously reducing the value of labor in so many little towns along the way until eventually nobody can ride the train.  From the Packer article:

According to a recent study of U.S. Census data by the Institute for Local Self-Reliance, in Washington, brick-and-mortar retailers employ forty-seven people for every ten million dollars in revenue earned; Amazon employs fourteen.

Like his Silicon Valley brethren, Amazon CEO Jeff Bezos speaks with the confidence and arrogance of determinism, as though these dominant, even monopolistic, technology companies are manifestations of the only history that could have unfolded in the digital age.  “Amazon is not happening to bookselling. The future is happening to bookselling,” Bezos is quoted as saying in the Packer article.  And while it may be true that the publishing industry does cling to some antiquated practices, there’s a subtle but important sleight of semantics at work when a wealthy corporation owner tells us that the manner in which his business operates was inevitable, ordained as it were by the natural order of our times.  Does this apply to the entire enterprise?  Are the transitory, non-union “pickers” hired to work in Amazon fulfillment centers in questionable conditions and for low wages an inevitability in this “future”? Because on the subject of antiquated practices, the notion that warehouse workers have to be treated like machines so that I can get a dollar off a luxury item like a book or a CD takes us back at least a century.  Does the future belong to people who make conscious choices, or is it already encoded by seven wizards who dwell in the sacred valley?

While the consumer is distracted by cheap commerce, the producers (authors) gaze at a different illusion — one that preaches self-reliance, a chance to connect directly with customers, and a way to bypass the traditional, elitist “gatekeepers.”  This is music to the aspiring writer’s ears, particularly if he’s been turned down for publication by one of those gatekeepers in the publishing world; but more total manuscripts uploaded by more writers do not mean that more great works must inevitably be discovered or that more writers will make a living through digital sales.  “The digital market is awash with millions of barely edited titles, most of it dreck, while readers are being conditioned to think that books are worth as little as a sandwich,” writes Packer.

There’s a reason my editor friend referred to “investing” in an author, and it’s because the best stuff almost always comes from the healthy center of an industry, where experienced professionals have the resources to cultivate something the market doesn’t know it wants yet. The best stuff comes from high-risk bets.  It’s not too hard to sell a slightly scandalous S&M trilogy or mass-market paperbacks or diet books. But stewardship of the next Toni Morrison is hard and takes experience and real risk because that kind of literature just isn’t going to be as popular as 50 Shades of Grey.  And unfortunately, what is threatened by the devaluation of all works by a model like Amazon are the resources available to make those riskier investments.  Some people may call the curators of those bets elitist, but which is the preferable tastemaker — the agent or editor steeped in literature his whole life, or Amazon’s pay-to-play model for promoting a book?  Or worse, how about a bot swarm telling us how great or awful some new ebook is?  I say, bring on the elitists.

The promise says “your work will retail for less, but you have the potential to sell more and pocket a larger percentage of the sales than you would with a traditional publisher.”  This illusion is how the internet industry convinces people that these models are examples of creative destruction — that these new opportunities for authors are what’s being created to replace those jobs in publishing and book retail that are being wiped out. Interestingly enough, though, Packer’s article mentions that even Bezos’s own wife, an author, published her last novel with Knopf and not through Amazon Publishing.  Since it’s a safe bet MacKenzie Bezos knows where her next meal is coming from, why not give Amazon Publishing or even direct sales on its platform a go?  Maybe because book publishing is more complicated than the Amazon model says it is.  From the Packer article:

“Writing is being outsourced, because the only people who can afford to write books make money elsewhere—academics, rich people, celebrities,” Colin Robinson, a veteran publisher, said. “The real talent, the people who are writers because they happen to be really good at writing—they aren’t going to be able to afford to do it.”

It was inevitable that these companies, once they controlled the lines of distribution, would get into the business of production; and while it’s reasonable to expect that Amazon as publisher might partner with some great authors and strike good deals with them, what would Amazon be at that point other than another so-called gatekeeper?  More importantly, everything about the company’s business practices suggests they expect to be the only gatekeeper, which is why all this democratization talk is bullshit; it’s a hypnotic used to blind people to the fact that these companies are designed to devour whole industries and emerge as the only game in town.  That doesn’t sound like the promise of the information age to me.

What does sound like the promise of the information age to me is something akin to my long-time friend and colleague Marco North’s venture, Bittersweet Editions. He spent over two years developing an artist-centric, all-digital publishing entity. Modeled after a classic small press, Bittersweet and ventures like it are seeking a balance, looking to provide authors a choice between big, corporate publishing and getting lost in a sea of “content” on the Web.  The roles of editing, marketing, and connecting with the right audience are still relevant, still take labor and expertise, and still have value.  Just like any editor/publisher, the small press or small label or small film distributor makes an investment in works and cannot help but impose his own tastes in making selections.  Call this “gatekeeping” if you will, but it seems to me that the better vision for the future is one that fosters more independent gatekeepers rather than one big company with a master key.