Thursday, September 03, 2015

Artificial Intelligence and Sentience

The September-October issue of SCIENTIFIC AMERICAN MIND includes an article titled "When Computers Surpass Us." The new generation of AI differs from computers such as Deep Blue, the famous IBM chess-playing machine, in the way the software learns. These systems are taught by trial and error, much as human and other organic brains learn. That method leads logically to the familiar SF prospect of machines with the ability to "self-improve by trial and error and by reprogramming their own code." Nick Bostrom, author of SUPERINTELLIGENCE: PATHS, DANGERS, STRATEGIES, contends that there is no reason why computer AI shouldn't eventually surpass human intelligence.

AI can be divided into "weak" or "narrow" and "strong" or "general." Our technology has already achieved dazzling progress in narrow AI, computers "able to replicate specific human tasks," such as driverless cars and facial identification software. Some futurists, including Bostrom, believe we'll produce general AI, with the versatility, language comprehension, and learning capacity of a typical human brain, before the end of this century.

We might ask why we'd want to create a computer that thinks exactly like a human being, except as a research project. We already have a planet full of human thinkers. The advantage of computer intelligence is its ability to do things we can do only with difficulty or not at all, such as lightning-fast mathematical calculations or analysis of vast, complicated systems. Even if or when superintelligent computers are built or evolve, thinking like human beings but on a far higher level, would they be likely to threaten us? Presumably the original AI minds from which all others descend will be designed with some version of Isaac Asimov's Three Laws of Robotics. Machine intelligences, having no emotions, wouldn't experience greed, ambition, or hate unless programmed to have those drives.

Of course, as the SCIENTIFIC AMERICAN MIND article discusses, an AI constructed with completely benign motivations could still be dangerous, even a "weak" or "narrow" one. A narrow AI tasked with "maximizing return on investments" might decide a national or worldwide disaster would be the most efficient method of increasing the earnings of its designated businesses. A computer ordered to make people happy might fulfill that command by implanting electrodes in the pleasure centers of their brains. A "strong" or "general" AI single-mindedly motivated to maximize human welfare might accomplish that goal by reshaping the world in directions we don't expect or want. Remember Jack Williamson's novel THE HUMANOIDS, about the robotic overlords who decided the optimal way to preserve human safety and happiness was to prevent human beings from doing much of anything?

Asimov's Three Laws can't solve the problem by themselves, since they're subject to complex difficulties of interpretation. The definitions of "human" and "harm" contain potential minefields. A robot must not harm a human being or, through inaction, allow a human being to come to harm. Suppose the only way to protect one person from harm is to hurt another, as in stopping a mugger to save the intended victim? A robot must obey the orders of human beings (subject to the limitations of the First Law). Does a robot have to obey a small child, a mentally disabled or deranged person, or a convicted criminal? Ambiguities such as these are why, in many of Asimov's stories, use of robots on Earth is forbidden, their employment being restricted to controlled environments. In one story, "That Thou Art Mindful of Him," two superintelligent robots analyze the meaning of "humanity" and decide it should be defined by mental capacity, not physical form. Therefore, they conclude that the two of them are the most "human" entities they know, and hence they don't have to obey anyone else.

Would a computer whose intelligence surpasses ours necessarily become conscious? Heinlein assumes so in works such as THE MOON IS A HARSH MISTRESS, with its self-aware, lunar-wide computer system Mike, and TIME ENOUGH FOR LOVE, in which the sentient computer Minerva decides to have her consciousness transferred into a flesh body (as much of it as will fit in a human brain, anyway) so she can experience love.

Another SCIENTIFIC AMERICAN article, "Intelligence Without Sentience," addresses this question.

The author maintains that our assumption of an intrinsic connection between high intelligence and consciousness isn't valid. The AI systems described in this essay are intelligent in the sense that they learn and remember, yet they "have none of the behaviors we associate with consciousness." He bluntly declares, "They are zombies, acting in the world but doing so without any feeling, displaying a limited form of alien, cold intelligence." Many people might argue that this behavior is by definition not intelligent, simply automatic. That would be circular reasoning, though, since if we include awareness as part of the definition of intelligence, the question implied in the article's title has no meaning. The prospect of a superintelligent but non-sentient AI doesn't seem as dire to me as this author hints. Without self-awareness, the computer wouldn't have any selfish motives or irrational emotions to prevent it from acting in humanity's best interests.

With the reservation, again, that an AI might view our best interests quite differently from the way we do.

Margaret L. Carter

Carter's Crypt
