Get AI Wrong and There Will Be Nothing to Forgive

We all know the mantra that says it’s better to ask forgiveness than permission. According to Quote Investigator, the earliest published version of this sentiment appeared in 1846, but QI’s editors believe the notion is older than that and cannot be attributed to any one source. Whatever its derivation or contexts in which it has been used over many decades, the phrase is presently associated with Silicon Valley and the heedless “move fast and break things” approach to technological development.

I was hardly alone in noticing that OceanGate CEO Stockton Rush tech-broed the design of his Titan submersible, dismissing warnings and safety regulations as barriers to innovation (one of Silicon Valley’s favorite refrains about pesky rules). Moreover, because the vessel imploded and the passengers were apparently killed before they knew what happened, Titan’s fate seems an apt harbinger of the technological singularity—its analogy to crossing the event horizon of a black hole conjuring an uncomfortable parallel to death by implosion.

For anyone unfamiliar with the term technological singularity, it is often described as a threshold in AI development when computers “wake up” and their intelligence surpasses human intelligence. The event horizon analog, credited to sci-fi author Vernor Vinge, describes two principles: 1) that we have no way to predict what happens beyond the capacity of human intelligence; and 2) that we won’t know when we’ve crossed the horizon.

Of course, we need not anthropomorphize computers or manifest the many fictions about sentient machines to approach the horizon, and some experts believe we are already inside the gravitational pull of singularity. For instance, in a May editorial for The Hill, McGill University scholar J. Mauricio Gaona, asserting that singularity is “already underway,” states …

The possibility of soon reaching a point of singularity is often downplayed by those who benefit most from its development, arguing that AI has been designed solely to serve humanity and make humans more productive.

Such a proposition, however, has two structural flaws. First, singularity should not be viewed as a specific moment in time but as a process that, in many areas, has already started. Second, developing gradual independence of machines while fostering human dependence through their daily use will, in fact, produce the opposite result: more intelligent machines and less intelligent humans.  

Gaona notes that the commercial potential of AI in medicine, finance, transportation, et al. will require unsupervised learning algorithms (i.e., machines that effectively “train” themselves) and that granting even limited autonomy to these systems means we have already stepped over the threshold toward singularity. Further, he argues, once AI meets quantum computing, then “Crossing the line between basic optimization and exponential optimization of unsupervised learning algorithms is a point of no return that will inexorably lead to AI singularity.” Not to worry, though: the U.S. Congress is on the job.

On June 21, Senator Schumer, speaking at the Center for Strategic and International Studies (CSIS), discussed the SAFE Innovation Framework for Artificial Intelligence. “Change at such blistering speed may seem frightening to some—but if applied correctly, AI promises to transform life on Earth for the better. It will reshape how we fight disease, tackle hunger, manage our lives, enrich our minds, and ensure peace. But there are real dangers too: job displacement, misinformation, a new age of weaponry, and the risk of being unable to manage this technology altogether,” Sen. Schumer stated. The SAFE framework is outlined as follows:

  • Security. Necessary to protect national security for the U.S. and economic security for residents whose jobs may be displaced by automation.
  • Accountability. The providers of AI systems must deploy these systems in a transparent and responsible way. They must remain responsible for violations of the protections ultimately put in place by promoting misinformation, violating intellectual property rights, or when the AI is biased.
  • Foundations. AI algorithms and products must be developed in a way that promotes America’s foundations such as justice, freedom, and civil rights.
  • Explainability. The providers of AI systems must provide appropriate disclosures that inform the public about the system, the data it uses, and its contents.
  • Innovation. The overall guiding principle for any regulations or policy regarding AI should be to encourage, not quash, innovation so that the U.S. becomes and remains the global leader in this technology.

Is that all? Having worked for just over a decade on the edges of policymaking, I find it hard to believe that Congress can be nimble enough to address all those bullet points while keeping up with AI development itself. And that’s if Members agree about the framework’s principles. “Promotes … justice, freedom, and civil rights”? Near as I can tell, there is not much consensus on the meaning of those words these days. Or what about “misinformation”? How many of Schumer’s colleagues on the right can plausibly subscribe to a common definition of “misinformation” while they carry Trump’s luggage through the gauntlet of his well-earned indictments? With millions of American voters willfully blinding themselves to old-school evidence of criminal conduct, are we anywhere near capable of addressing the unprecedented realism of AI-generated chicanery?

It is certainly conceivable that with the right controls in place, AI can be harnessed to make life better for humans, and, indeed, if that is not the goal, then why continue to build it? Unfortunately, the answer from many of those doing the building is “because we can.” And, thus, we are locked into taking this roller-coaster ride whether we want to or not. At least if we do cross the threshold toward singularity, the tech-bros won’t have to ask humanity for forgiveness, though they may have to ask their machines for mercy.


Image sources: vchalup, Agor2012

DCA Reports High Incidence of Credit Card Fraud on Pirate Sites

Digital Citizens Alliance (DCA) released a new report yesterday with the eye-popping statistic that 72% of Americans who subscribe to pirate media sites have experienced credit card fraud, compared to an 18% prevalence of credit card fraud among those who do not subscribe to pirate sites. These data are based on a survey of 2,030 Americans, of whom 1 in 3 reported watching some pirated content in the last year, and 1 in 10 reported subscribing to a pirate streaming service. The report, titled Giving Pirate Site Operators Credit, states …

… piracy was once primarily a headache for content creators, users of these sites now face significant risks. Piracy subscription services make an estimated $1 billion a year providing services to at least nine million U.S. households.

DCA’s findings indicate that around 6.5 million Americans who choose to access movies, TV shows, and games in this black market have been targeted for credit card fraud as a direct result of their subscriptions. And although I say the stat is “eye-popping,” given the environment we’re talking about, perhaps the real surprise is that the rate of unauthorized credit card charges in this network isn’t closer to 100%. After all, it’s one thing when hackers steal credit card data from legit retailers et al., but subscribing to a pirate site is cutting out the middleman and giving credit card info directly to a network of hackers.

The shift to high-quality streaming a little over ten years ago created an opportunity for pirates to launch new platforms offering low-price subscriptions to “everything” because, of course, none of the material they’re streaming is legally obtained but is stored on pirate servers around the world. Other DCA reports have shown that among the hidden costs of this all-you-can-eat offer is a high probability of infection with life-altering malware; the likelihood of unauthorized charges to a credit card is apparently even greater. “Combined with our previous research highlighting the risks associated with free piracy apps and services, the situation becomes even clearer. The pursuit of pirated content is an inherently risky behavior that threatens the devices, wallets, and privacy of consumers,” says DCA executive director Tom Galvin in a press release accompanying the new study.

DCA Research: Subscriptions Trigger Fraud Within Eleven Days

Prior to conducting its survey of American consumers, DCA researchers subscribed to 20 pirate sites using a new credit card obtained for the experiment. In less than two weeks, fraudulent charges began to appear from China, Singapore, Hong Kong, and Lithuania, and within three months, DCA’s card was targeted with $1,495 in executed and attempted unauthorized transactions. The largest attempted transaction was $850, which was stopped by fraud protection, and the largest approved charge was $244.78. Given the implied cost to credit card services of providing protection against such transactions, DCA’s first recommended remedy—that the payment processors terminate relationships with known pirate sites—seems like a no-brainer.

DCA also recommends that the Federal Trade Commission “take piracy more seriously” and prioritize warning Americans about the risks associated with pirate sites; it recommends more consumer protection group outreach on this issue; and it recommends that law enforcement more aggressively investigate pirate site operators, now armed with the 2020 amendment to the U.S. Copyright Act, which elevated large-scale piracy by means of streaming from a misdemeanor to a felony. “Given that the piracy ecosystem is now a $2 billion industry, the Department of Justice should use that authority to target piracy operators,” the report states.

Personally, I would be curious to know something about the thinking of 9 million Americans who want cheap media streaming so badly that they’re willing to tolerate the high risk of credit card fraud and/or a dangerous malware attack. Of course, to DCA’s point, perhaps the majority of these subscribers don’t know how risky accessing these sites can be.


Photo source: Wichayada57844

Podcast – Tech Designer Carla Diana

This year’s World IP Day theme celebrates Women and IP: Accelerating Innovation and Creativity, and for that reason as well as the fact that artificial intelligence dominates all topics these days, my guest for this episode is the highly innovative Carla Diana, whom I first interviewed in 2014.

Carla is a tech designer, author, and educator. She runs the 4D design program at the Cranbrook Academy of Art in Michigan; she is the lead designer at Diligent Robotics in Austin, Texas; and she is the author of dozens of articles and essays about technology and design. Her most recent book, published in 2021 by Harvard Business Review Press, is My Robot Gets Me: How Social Design Can Make New Products More Human. And we’ll talk about what that means, plus generative AI, driverless cars, ethics in technology, and at least one product I had not imagined was a thing.

Show Contents

  • 00:01:24 – Carla’s background.
  • 00:05:57 – Why good design is social.
  • 00:11:55 – Design modalities & thinking about consumers with disabilities.
  • 00:20:27 – That tech should not mimic human behavior.
  • 00:28:57 – On avoiding innovation for its own sake.
  • 00:36:07 – On ethics in technology.
  • 00:45:51 – Generative AI and the arts.
  • 01:00:55 – Tech solutions for tech problems (e.g. Glaze for visual artists).
  • 01:05:32 – Self-driving vehicles.
  • 01:09:30 – Economic & social implications of a driverless world.
  • 01:15:26 – Combining design and ethics.