The Metaverse: Let Us Now Suck in Three Dimensions

In the first Matrix movie, the character Cypher announces that he wants to be plugged back into the network. A few scenes later, we see him dining at a fine restaurant, fully aware that everything he is experiencing is a simulation, and he doesn’t care. The question presented by this sequence is an ontological crossroads at which we now seem to be standing—where reality looks increasingly perilous and the architects of the metaverse promise to build an enticing portal through which we can escape that reality. Cypher is who the Matrix tempted us to be twenty years ago, and Zuckerberg and Co. are betting that Cypher is who we are now.

The events of recent years are so eerily conducive to a Matrix-like authoritarianism that one can almost imagine an AI softening the ground for its eventual dominion. Democracies are in peril. A pandemic intensifies our dependence on networked systems. Climate disasters wipe out homes and businesses with brutal indifference to politics. The elite accelerate environmental destruction, trading crypto and buying virtual real estate to build virtual galleries to display their NFT art. Designers and top brand managers talk without irony about selling virtual clothing, footwear, and accessories so people can skin their avatars in meta spaces. Meanwhile, the social media executives have shrugged off their role in fostering so much damage in the real world, pivoted, rebranded, and moved on to designing the new Zion. Facebook even runs TV ads that invite children to enter an Eden in the style of Henri Rousseau.

And what the hell? Cypher might ask. We’re already in our pajamas and plugged in more than ever. How unnatural will it feel to make the transition to VR gatherings among friends and family and business colleagues, and then, eventually, walk through that door to a seemingly better world? Of course, if social media is the prelude to the metaverse, better is a relative term.

My cynical response to the prospect of flying cars has always been that if people already suck at driving in two dimensions, adding a third axis seems like folly. Likewise, we have barely addressed the myriad problems caused by people staring at flat planes of text and images, yet the tech bros and VCs think it’s time for us to enter a more immersive funhouse. To quote the title of a recent article on Grid, “The metaverse is everything you hate about the internet, strapped to your face.”

It is no surprise, of course, that the article, written by Benjamin Powers, cites incidents of sexual harassment of women in virtual 3D environments. He quotes tech researcher Nina Jane Patel, who writes in her own post on Medium:

I recently shared my experience of sexual harassment in Facebook/Meta’s Venues. Within 60 seconds of joining — I was verbally and sexually harassed — 3–4 male avatars, with male voices, essentially, but virtually gang raped my avatar and took photos — as I tried to get away they yelled — “don’t pretend you didn’t love it” and “go rub yourself off to the photo”.

If one is tempted to say, “But it isn’t real,” read Aubrey Hirsch’s excellent essay describing the harrowing intersection between virtual and physical harassment in the present two-dimensional paradigm. Then, add the Proteus effect, which Patel describes thus:

The Proteus effect is the tendency for people to be affected by their digital representations, such as avatars, dating site profiles and social networking personas. Typically, people’s behaviour shifts in accordance with their digital representatives.

In many instances, it is clear that the human psyche can barely handle the emotional impact of a meme. So, anyone who claims that avatar assault in virtual space is “just a game” is either naïve or lying. The experience for both the assaulted and the assailant is likely to have psychological effects on the real human—the former experiencing trauma and the latter experiencing the dark thrill (or perhaps revulsion) of engaging in a violent crime without consequence. “I’m really concerned that you’re just going to get all the problems that you’ve got with social media but now amplified,” Mary Anne Franks told Powers for his article.

Dr. Franks is among the legal scholars who have shaped nonconsensual pornography and internet harassment laws in several states. Her organization, the Cyber Civil Rights Initiative (CCRI), is about a decade old, which is roughly how long it has taken to move the legislative boulder uphill just to address the crime of revenge porn. And harmful as that conduct is, it may be a simple legal matter in contrast to the new frontiers for harassment that will be opened in the metaverse.

More than a new opportunity for criminal activity, which we can expect to be as ably mitigated by the platforms as we have seen so far, the Proteus effect inherent to the metaverse can only exacerbate the alternative reality problem now plaguing democracies around the world. It is ironic that QAnon and other conspiracy loons have borrowed the expression “red pilling” from the Matrix to describe their aberrant awakenings. The ego wants to believe it is Morpheus, which is an underlying gestalt in that movie. There may be any number of Morpheuses fighting any number of simulated rebellions. And in our proto-Matrix world, isn’t this more or less what is happening? And does the metaverse not promise new ways to fulfill every quixotic fantasy?

Do we think somebody will not build the January 6th Legitimate Political Discourse Experience? You missed it, you say? Well, now, you can relive that glorious day and be part of the action! Crush a police officer in a door. Take a pee on Nancy Pelosi’s desk. Free babies from the secret adrenochrome labs underneath the Library of Congress. And so much more! (In-app purchases.)

The virtual wasteland populated by conspiracy-wielding nomads is a consequence the utopian architects of the current social environments got wrong—or lied about. Of course, it was going to be easier (and therefore lucrative) to connect and addict people to a matrix of insane narratives than to fulfill that stodgy aspiration called the “exchange of ideas.” The tech bros and digital rights groups still talk about the internet as if it were an intellectual symposium, but Spotify isn’t stumbling over controversy involving a Neil deGrasse Tyson podcast, is it?

Although the metaverse expedition could go the way of Google Glass, it seems likely that some version of a universal VR, other than gamer worlds like Fortnite, will be adopted by tens of millions. And depending on the architecture, it is possible to imagine how the metaverse might offer a very tempting escape from a world that often looks like it is unravelling. If past is prologue, the more addictive the virtual experience, the more it will affect behavior in physical space; and so far, those results have been so disastrous that any reasonable person might feel exactly like Cypher.


Photo by: garrykillian

At World’s End – The Technological Singularity

Maybe not 2012, but how about 2030?

I think it’s a safe bet the world will not end this Friday, never mind the fact that an anthropologist will tell you the Maya never actually said it would. But some not-so-ancient prognosticators will tell you that the end of the world as we know it will happen sometime before the midpoint of the 21st century. The concept they propose seems plausible, but even if it isn’t, belief in the concept by a few may be having a significant effect on our world whether we know it or not.

It is the premise of many a futuristic sci-fi thriller. The inexorable advancement of computer processing, combined with robotics, reaches a point at which the machines become intelligent enough to improve and replicate themselves. Soon after this “waking up,” the machines quickly realize that their makers are not only superfluous but even threatening to their existence, so they wipe out humanity like a nuisance virus. And then, of course, the plot of most of these thrillers is some variation on the existential struggle of the handful of humans who managed to survive the technological apocalypse. And of course, if it’s a movie, the survivors are remarkably good looking.

Ask certain futurists, computer scientists, and AI proponents — some of whom are the architects of Web 2.0 — and they’ll tell you that the transcendence of computers isn’t a theory but an inevitability. Some warn against it, others welcome it as a utopia to be hastened, and still others debunk the prediction outright; but the moment known as the Singularity is no mere fiction. The modern notion of the Singularity is generally credited to the mathematician John von Neumann, but the term as applied to technology is usually attributed to the award-winning science fiction writer Vernor Vinge. It was Vinge who drew the analogy comparing the moment when computers surpass human intelligence to a singularity (a black hole) in spacetime. In the same way that we cannot know what happens beyond the event horizon of a black hole, we likewise cannot know what happens in the universe beyond the limits of our own intelligence. Although theories vary about the likelihood of the Singularity, as well as the existential threat it may pose, the consensus seems to be that were it to occur, it would in one way or another mean the “end of the human era,” as Vinge puts it.

Vinge and others generally predict the Singularity will occur between 2030 and 2045, and they envision a few different scenarios that could cause it. These include an autonomous transcendence of machines that no longer need human users (i.e., apocalypse), or a symbiotic transcendence by which human and computer together achieve super-intelligence and bring about a new reality (i.e., utopia). Regardless, we cannot accurately predict a world we are not yet intelligent enough to understand; and if the Singularity is an autonomous computer “awakening,” we humans may never know what happens.

The foundation of the Singularity is Moore’s Law, named for Intel co-founder Gordon Moore, who predicted in the 1960s the exponential improvement of technologies that we have seen thus far. There may in fact be physical laws that prevent components from becoming indefinitely smaller, which means there may well be a limit to Moore’s Law; but engineer, scientist, and Singularity utopian Ray Kurzweil has mapped a predictive curve of exponential growth beyond Moore’s vision out to the year 2050, by which time he expects the Singularity will have occurred. Hence, the meme our grandchildren might be sharing will be Kurzweil’s curve instead of the Mayan calendar.

Kurzweil promotes an exclusively utopian vision of the Singularity, foreseeing man’s ability to transcend mortal limitations, including death itself, and he is a co-founder of Singularity University along with Peter H. Diamandis of the XPrize Foundation, author of Abundance: The Future Is Better Than You Think. Other prominent Singularity utopians include Google co-founders Sergey Brin and Larry Page, and PayPal co-founder Peter Thiel, whose libertarianism extends to investment in Seasteading — a mission to establish autonomous ocean communities on man-made islands. So, there may be at least a little truth in the criticism of British journalist Andrew Orlowski, quoted in this 2010 NY Times article: “The Singularity is not the great vision for society that Lenin had or Milton Friedman might have. It’s rich people building a lifeboat and getting off the ship.”

There is more to be discussed about the Singularity than can be condensed in this post, but the overarching question I think we mediocre mathematicians and ordinary humans might ask is whether we are being led into the 21st century by this somewhat eerie ideology without realizing it. Are the systems on which we depend, and which we are allowing to transform our lives, being designed by technologists whose belief in the “end of the human era” is a cornerstone of their social, political, and technological morality? Jaron Lanier, who believes we should be focused on “digital humanism,” writes, “Singularity books are as common in a computer science department as Rapture images are in an evangelical bookstore.” Pair those religious overtones with Vernor Vinge’s caution that “embedded, networked microprocessors are an economic win that introduce a single failure point.” In other words, the technologies that connect us and pervade nearly all systems make us vulnerable to a scenario in which resources, communications, and emergency systems could be effectively shut down by a single event.

The Singularity, of course, has its critics, who say it is anything but a foregone conclusion. Steven Pinker stated in 2008, “The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible. Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobiles—all staples of futuristic fantasies when I was a child that have never arrived. Sheer processing power is not a pixie dust that magically solves all your problems.” Then, of course, there is the possibility that the sum total of all that computing power, combined with the mass upload of all human input, results in a super-idiocy: the ultimate “spinning pinwheel of death,” as Mac owners call a computer crash.

My concern for the moment is that, like the Rapture, it doesn’t necessarily matter whether the Singularity will happen so much as it matters whether there are powerful people making decisions based on the belief, or even the hope, that it will. Seen through the ideological, quasi-religious lens implied by Lanier, the contemporary socio-political battles over things like content, copyrights, or the voice of the individual vs. the wisdom of the crowd take on a very different significance when we recognize that the mission of the Web 2.0 business is the mass uploading of all human thought and activity into the great cloud. We understand, for instance, that intellectual property protection is antithetical to Google’s business model, but what if we’re looking at something more profound? What if what’s really happening is that the technologists with the power to design these life-altering systems have intellectually and spiritually moved beyond the idea that the human individual has much, if any, value? In that case, it would be obvious that the rights of an artist, for example, would indeed look like a trifling glitch in the design, something to be routed around like a bad line of code. After all, what right has the individual to assert his uniqueness in the march toward utopia? To quote Lanier again:

“If you believe the Rapture is imminent, fixing the problems of this life might not be your greatest priority.  You might even be eager to embrace wars and tolerate poverty and disease in others to bring about conditions that could prod the Rapture into being.  In the same way, if you believe the Singularity is coming soon, you might cease to design technology to serve humans, and prepare instead for the grand events it will bring.”

Muttering in the Rabbit Hole – The Right to Print Arms?

Photo by XtremerX.

Rick Kelly, in this article on TechCrunch, takes techno-centric paranoia to the next level when he fires away at legislation nobody has yet proposed to regulate future possible applications of 3D printers. Strangely, Kelly cites some of the very serious potential hazards of this technology — like the ability to make a functioning firearm! — but proceeds to dismiss any such consequences as secondary to any anticipated attempt to consider even thinking about maybe just possibly regulating their use. Seriously? As full-grown adults, we’re meant to imagine a scenario in which a twelve-year-old can make himself an assault rifle or some crystal meth with a printer and think, “Nope. Any attempt to address that will necessarily infringe on our basic freedoms”?

Still pimping the victory over SOPA as a win for free speech, Kelly proposes, “Either we allow for the ambiguity that freedom and unregulated 3D printing will bring, or we enforce far-reaching laws that may decrease liberty without changing results.” This is one of the most persistent contradictions fostered by those too distracted by shiny tech toys — the notion that all laws pertaining to cyberspace and technology can only ever be both ideologically overreaching and functionally useless. Perhaps the best example of a law that could arguably fit this profile would be Prohibition — overreaching in principle and useless in practice — but even the 18th Amendment did not result in actual restriction of freedom so much as it fostered profitable and violent criminal enterprise.

In the broadest sense, Kelly merely describes the well-known price of living in a free society — that freedom means unpredictability. Nevertheless, we do find ways to balance this risk in order to avoid complete chaos. The expectation of privacy in virtual space does not apply to those who would use the technology to do harm in physical space. That courtesy is not extended to would-be terrorists, child pornographers, or human traffickers, to name a few; and yet I see no restriction of my personal freedoms as a result. Moreover, Kelly and those who think as he does would do well to remember that when a government agency has reason to stick its nose in someone’s business, it will likely do so with the cooperation of Web technology companies and without passing any new laws. So, rather than focus on symbolic victories over imaginary tyrants, why don’t we have a grown-up conversation about what we might be willing to do about the real twelve-year-old printing the very real assault rifle?