Here's an article on the PBS website exploring the issue of what might happen if artificial intelligences were granted the status of legal persons:
Artificial Intelligence Personhood

Corporations are already "persons" under the law, with free-speech rights and the capacity to sue and be sued. The author of this article outlines a legal procedure by which a computer program could become a limited liability company. He points out, somewhat alarmingly, "That process doesn’t require the computer system to have any particular level of intelligence or capability." The "artificial intelligence" could be simply a decision-making algorithm. Next, however, he makes what seems to me an unwarranted leap: "Granting human rights to a computer would degrade human dignity." First, bestowing some "human rights" on a computer wouldn't necessarily entail giving it full citizenship, particularly the right to vote. As the article mentions, "one person, one vote" would become meaningless when applied to a program that could make infinite copies of itself. But corporations have been legal "persons" for a long time, and they don't get to vote in elections.
The author cites the example of a robot named Sophia, who (in October 2017) was declared a citizen of Saudi Arabia:
Saudi Arabia Grants Citizenship to a Robot

Some commentators noted that Sophia now has more rights than women or migrant workers in that country. If Sophia's elevated status becomes an official precedent rather than merely a publicity stunt for the promotion of AI research, surely the best solution to the perceived problem would be to improve the rights of naturally born persons. In answer to a question about the dangers of artificial intelligence, Sophia suggests that people who fear AI have been watching "too many Hollywood movies."
That PBS article on AI personhood warns of far-fetched threats that are long-established clichés in science fiction, starting with, "If AI systems became more intelligent than people, humans could be relegated to an inferior role." Setting aside the fact that computer intelligence has a considerable distance to go before it attains a level anywhere near ours, which gives us plenty of time to prepare, remember that human inventors design and program those AI systems. Something like Asimov's Laws of Robotics could be built in at a fundamental level. The most plausible of the article's alarmist predictions, in my opinion, is the possibility of a computer's accumulating "immortal wealth." It seems more likely, however, that human tycoons would use the AI as a front than that it would use them as a puppet.
Furthermore, why would an intelligent robot or computer want to rule over us? As long as the AI has the human support it needs to perform the function it was designed for, why would it bother wasting its time or brainpower on manipulating human society? An AI wouldn't have emotional weaknesses such as greed for money or lust for power, because emotion is a function of the body (adrenaline, hormone imbalances, accelerated breathing and heartbeat, etc.). Granted, it might come to the rational conclusion that we're running the world inefficiently and need to be ruled for the benefit of ourselves and our electronic fellow citizens. That's the only immediate pitfall I can see in giving citizenship rights to sapient, rational machines programmed for beneficence. Nor is this potential hazard a new idea; it has been explored by numerous SF authors, as far back as Jack Williamson's "With Folded Hands" (1947). So relax; HAL won't be throwing us out the airlock anytime soon.
Margaret L. Carter
Carter's Crypt