We all know the mantra that says it’s better to ask forgiveness than permission. According to Quote Investigator, the earliest published version of this sentiment appeared in 1846, but QI’s editors believe the notion is older than that and cannot be attributed to any one source. Whatever its derivation or the contexts in which it has been used over many decades, the phrase is presently associated with Silicon Valley and the heedless “move fast and break things” approach to technological development.
I was hardly alone in noticing that OceanGate CEO Stockton Rush tech-broed the design of his Titan submersible, dismissing warnings and safety regulations as barriers to innovation (one of Silicon Valley’s favorite refrains about pesky rules). Moreover, because the vessel imploded and the passengers were apparently killed before they knew what happened, Titan’s fate seems an apt harbinger of the technological singularity: the analogy to crossing the event horizon of a black hole conjures an uncomfortable, crushing parallel to death by implosion.
For anyone unfamiliar with the term technological singularity, it is often described as a threshold in AI development at which computers “wake up” and their intelligence surpasses human intelligence. The event horizon analogy, credited to sci-fi author Vernor Vinge, rests on two principles: 1) that we have no way to predict what happens beyond the capacity of human intelligence; and 2) that we won’t know when we’ve crossed the horizon.
Of course, we need not anthropomorphize computers or manifest the many fictions about sentient machines to approach the horizon, and some experts believe we are already inside the gravitational pull of singularity. For instance, in a May editorial for The Hill, McGill University scholar J. Mauricio Gaona, asserting that singularity is “already underway,” states …
> The possibility of soon reaching a point of singularity is often downplayed by those who benefit most from its development, arguing that AI has been designed solely to serve humanity and make humans more productive.
>
> Such a proposition, however, has two structural flaws. First, singularity should not be viewed as a specific moment in time but as a process that, in many areas, has already started. Second, developing gradual independence of machines while fostering human dependence through their daily use will, in fact, produce the opposite result: more intelligent machines and less intelligent humans.
Gaona notes that the commercial potential of AI in medicine, finance, transportation, and other sectors will require unsupervised learning algorithms (i.e., machines that effectively “train” themselves) and that granting even limited autonomy to these systems means we have already stepped over the threshold toward singularity. Further, he argues, once AI meets quantum computing, “Crossing the line between basic optimization and exponential optimization of unsupervised learning algorithms is a point of no return that will inexorably lead to AI singularity.” Not to worry, though: the U.S. Congress is on the job.
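As an aside for readers who want a concrete sense of what “training themselves” means, here is a minimal sketch of an unsupervised learning algorithm, k-means clustering. It is my illustrative example, not anything drawn from Gaona’s editorial: the program is handed raw, unlabeled points and discovers the structure (two clusters) entirely on its own.

```python
# A minimal sketch of unsupervised learning: k-means clustering.
# No labels are supplied; the algorithm discovers structure
# (cluster centers) in raw data by itself.
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    # Initialize centers from k randomly chosen data points.
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # Assign every point to its nearest center.
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each center to the mean of its assigned points,
        # keeping the old center if a cluster comes up empty.
        centers = np.array([
            points[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
    return centers, labels

# Two synthetic blobs of 2-D data; nothing tells the algorithm
# which blob a point belongs to, yet it separates them unaided.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
centers, labels = kmeans(data, k=2)
print(centers)  # roughly (0, 0) and (5, 5)
```

Gaona’s concern, of course, is about systems vastly more capable than this toy: once such systems optimize without human supervision at scale, no one is checking the output.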
On June 21, Senator Schumer, speaking at the Center for Strategic and International Studies (CSIS), discussed the SAFE Innovation Framework for Artificial Intelligence. “Change at such blistering speed may seem frightening to some—but if applied correctly, AI promises to transform life on Earth for the better. It will reshape how we fight disease, tackle hunger, manage our lives, enrich our minds, and ensure peace. But there are real dangers too: job displacement, misinformation, a new age of weaponry, and the risk of being unable to manage this technology altogether,” Sen. Schumer stated. The SAFE framework is outlined as follows:
- Security. Necessary to protect national security for the U.S. and economic security for residents whose jobs may be displaced by automation.
- Accountability. The providers of AI systems must deploy these systems in a transparent and responsible way. They must remain responsible for violations of the protections ultimately put in place, whether by promoting misinformation, violating intellectual property rights, or deploying biased AI.
- Foundations. AI algorithms and products must be developed in a way that promotes America’s foundations such as justice, freedom, and civil rights.
- Explainability. The providers of AI systems must provide appropriate disclosures that inform the public about the system, the data it uses, and its contents.
- Innovation. The overall guiding principle for any regulations or policy regarding AI should be to encourage, not quash, innovation so that the U.S. becomes and remains the global leader in this technology.
Is that all? Having worked for just over a decade on the edges of policymaking, I find it hard to believe that Congress can be nimble enough to address all those bullet points while keeping up with AI development itself. And that’s assuming Members agree on the framework’s principles. “Promotes … justice, freedom, and civil rights”? Near as I can tell, there is not much consensus on the meaning of those words these days. Or what about “misinformation”? How many of Schumer’s colleagues on the right can plausibly subscribe to a common definition of “misinformation” while they carry Trump’s luggage through the gauntlet of his well-earned indictments? With millions of American voters willfully blinding themselves to old-school evidence of criminal conduct, are we anywhere near capable of addressing the unprecedented realism of AI-generated chicanery?
It is certainly conceivable that with the right controls in place, AI can be harnessed to make life better for humans, and, indeed, if that is not the goal, then why continue to build it? Unfortunately, the answer from many of those doing the building is “because we can.” And, thus, we are locked into taking this roller-coaster ride whether we want to or not. At least if we do cross the threshold toward singularity, the tech-bros won’t have to ask humanity for forgiveness, though they may have to ask their machines for mercy.