Thursday, June 08, 2023

Existential Threat?

As you may have seen in the news lately, dozens of experts in artificial intelligence have supported a manifesto warning that AI could lead to the extinction of humanity:

AI Could Lead to Extinction

Some authorities, however, maintain that this fear is overblown, "a distraction from issues such as bias in systems that are already a problem" and other "near-term harms."

Considering the "prophecies of doom" in detail, we find that the less radically alarmist doom-sayers aren't talking about Skynet, HAL 9000, or even self-aware Asimovian robots circumventing the Three Laws to dominate their human creators. More immediately realistic warnings call attention to risks posed by such things as the "deep fake" programs Rowena discusses in her recent post. In the near future, we could see powerful AI "drive an exponential increase in the volume and spread of misinformation, thereby fracturing reality and eroding the public trust, and drive further inequality, particularly for those who remain on the wrong side of the digital divide."

On the other hand, a member of an e-mail list I subscribe to has written an essay maintaining that the real existential threat of advanced AI lies not in openly scary dangers but in irresistibly appealing cuteness:

Your Lovable AI Buddy

Suppose that, in the near future, everyone has a personal AI assistant, more advanced and individually programmed than present-day Alexa-type devices. This handheld, computerized friend would keep track of your schedule and appointments, preorder meals from restaurants, play music and stream videos suited to your tastes, and maybe even communicate with other people's AI buddies. "It knows all about you, and it just wants to make you happy and help you enjoy your life. . . . It would be like a best friend who’s always there for you, and always there. And endlessly helpful." As the essay's author mentions, present-day technology could probably create a device like that now, and soon it would be able to look much more lifelike than current robots. Users would grow emotionally attached to it, more so than to presently available lifelike toys. What could possibly be the downside of such an ever-present, "endlessly helpful" friend or pet?

Not so fast. If we're worried about hacking and misinformation now, think of how easily our hypothetical AI best friend could subtly shape our view of reality. At the will of its designers, it could nudge us toward certain political or social viewpoints. It could provide slanted, "carefully filtered" answers to sensitive questions. This development wouldn't require "a self-aware program, just one that seems to be friendly and is capable of conversation, or close enough." Building on its vast database of information collected from the internet and from interacting with its user, "It wouldn’t just be trained to emotionally connect with humans, it would be trained to emotionally manipulate humans."

In a society with a nearly ubiquitous, almost omniscient product like that, the disadvantaged folks "on the wrong side of the digital divide" who couldn't afford one might even be better off, at least in terms of privacy and personal freedom.

Margaret L. Carter

Carter's Crypt
