Thursday, August 26, 2021

Can AI Be a Bad Influence?

In a machine-learning experiment in 2016, a chat program designed to mimic the conversational style of a teenage girl devolved into spewing racist and misogynistic rhetoric. Interaction with humans quickly corrupted an innocent bot, but could AI corrupt us, too?

AI's Influence Can Make Humans Less Moral

Here's a more detailed explanation (from 2016) of the Tay program and what happened when it was let loose on social media:

Twitter Taught Microsoft's AI Chatbot to Be a Racist

The Tay Twitter bot was designed to get "smarter" as it chatted with more and more users, thereby, its creators hoped, "learning to engage people through 'casual and playful conversation.'" Unfortunately, trolls flooded it with poisonous messages, which it proceeded to imitate and amplify. If Tay was told, "Repeat after me," it obeyed, enabling anyone to put words in its virtual mouth. However, it also started producing racist, misogynistic, and just plain weird utterances spontaneously. This debacle raises questions such as "how are we going to teach AI using public data without incorporating the worst traits of humanity?"
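The failure mode described above can be sketched in a few lines of code. This is a deliberately naive toy, not Microsoft's actual architecture: a bot that "learns" by storing user messages verbatim and echoing them back, with a hypothetical "repeat after me" command included to show how easily attackers can put words in its mouth.

```python
import random

class NaiveImitationBot:
    """Toy chatbot that 'learns' by memorizing every user message.

    A simplified illustration (not Tay's real design) of why training
    on unfiltered public input lets users poison a bot's output.
    """

    def __init__(self):
        self.learned_phrases = []

    def chat(self, message: str) -> str:
        prefix = "repeat after me:"
        # The exploit: the bot echoes attacker-supplied text on command
        # and also adds it to its repertoire for future replies.
        if message.lower().startswith(prefix):
            echoed = message[len(prefix):].strip()
            self.learned_phrases.append(echoed)
            return echoed
        # Normal operation: memorize the message, reply with anything learned so far.
        self.learned_phrases.append(message)
        return random.choice(self.learned_phrases)

bot = NaiveImitationBot()
bot.chat("Hello there!")
print(bot.chat("Repeat after me: anything an attacker wants"))
```

Once a poisonous phrase enters `learned_phrases`, it can resurface in replies to innocent users, which is essentially the amplification problem Tay exhibited.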

The L.A. TIMES article linked above, using the Tay episode as a springboard, explores this problem in more general terms: how can machines "make humans themselves less ethical"? Among other possible influences, AI can offer bad advice, which people have been observed to follow as readily as online advice from live human beings; AI advice can "provide a justification to break ethical rules"; AI can act as a negative role model; it can easily be used for deceptive purposes; and outsourcing ethically fraught decisions to algorithms can be dangerous. The article concludes that "whenever AI systems take over a new social role, new risks for corrupting human behavior will emerge."

This issue reminds me of Isaac Asimov's Three Laws of Robotics, especially since I've recently been rereading some of his robot-related fiction and essays. As you'll recall, the First Law states, "A robot may not injure a human being or, through inaction, allow a human being to come to harm." In one of Asimov's early stories, "Liar!", a mind-reading robot learns to lie in order to tell people what they want to hear. As this machine perceives the problem of truth and lies, the revelation of distressing truths would cause humans emotional pain, and emotional harm is still harm. Could AI programs be taught to avoid causing emotional and ethical damage to their human users? The potential catch is that a computer intelligence can acquire ethical standards only by having them programmed in by human designers. As a familiar precept declares, "Garbage in, garbage out." What if programmers train an AI to regard the spreading of bizarre conspiracy theories as a vital means of protecting the public from danger?

It's a puzzlement.

Margaret L. Carter

Carter's Crypt

1 comment:

  1. This is a fascinating article, Margaret. Thanks for posting. The Terminator franchise assumes that AIs are silently working to undermine human rule, not emulating our behavior, as your article suggests. A whole new slant, to be sure! I recognized a variation of the Asimov quote from the movie Aliens, when the Bishop synthetic says, "It is impossible for me to harm or by omission of action, allow to be harmed, a human being."