Showing posts with label robots. Show all posts

Thursday, January 11, 2024

Robotic Companions

A robotic device called ElliQ, which functions as an AI "companion" for older people, is now available for purchase by the general public at a price of only $249.99 (plus a monthly subscription fee):

Companion Robot

As shown in the brief video on this page, "she" has a light-up bobble-head but no face. Her head turns and its light flickers in rhythm with her voice, which in my opinion is pleasant and soothing. The video describes her as "empathetic." From the description of the machine, it sounds to me like a more advanced incarnation of inanimate personal assistants similar to Alexa (although I can't say for sure because I've never used one). The bot can generate displays on what looks like the screen of a cell phone. ElliQ's makers claim she "can act as a proactive tool to combat loneliness, interacting with users in a variety of ways." She can remind people about health-related activities such as exercising and taking medicine, place video calls, order groceries, engage in games, tell jokes, play music or audiobooks, and take her owner on virtual "road trips," among other services. She can even initiate conversations by asking general questions.

Here's the manufacturer's site extolling the wonders of ElliQ:

ElliQ Product Page

They call her "the sidekick for healthier, happier aging" that "offers positive small talk and daily conversation with a unique, compassionate personality." One has to doubt the "unique" label for a mass-produced, pre-programmed companion, but she does look like fun to interact with. I can't help laughing, however, at the photo of ElliQ's screen greeting her owner with "Good morning, Dave." Haven't the creators of this ad seen 2001: A SPACE ODYSSEY? Or maybe they inserted the allusion deliberately? I visualize ElliQ locking the client in the house and stripping the premises of all potentially dangerous features.

Some people have reservations about devices of this kind, naturally. Critics express concerns that dependence on bots for elder care may be "alienating" and actually increase the negative effects of isolation and loneliness. On the other hand, in my opinion, if someone has to choose between an AI companion and nothing, wouldn't an AI be better?

I wonder why ElliQ doesn't have a face. Worries about the uncanny valley effect, maybe? I'd think she could be given animated eyes and mouth without getting close enough to a human appearance to become creepy.

If this AI were combined with existing machines that can move around and fetch objects autonomously, we'd have an appliance approaching the household servant robots of Heinlein's novel THE DOOR INTO SUMMER. That book envisioned such marvels existing in 1970, a wildly optimistic notion, alas. While I treasure my basic Roomba, it does nothing but clean carpets and isn't really autonomous. I'm not at all interested in flying cars, except in SF novels or films. Can you imagine the two-dimensional, ground-based traffic problems we already live with expanded into three dimensions? Could the average driver be trusted with what amounts to a personal aircraft in a crowded urban environment? No flying car for me, thanks -- where's my cleaning robot?

Margaret L. Carter

Please explore love among the monsters at Carter's Crypt.

Thursday, June 08, 2023

Existential Threat?

As you may have seen in the news lately, dozens of experts in artificial intelligence have supported a manifesto claiming AI could threaten the extinction of humanity:

AI Could Lead to Extinction

Some authorities, however, maintain that this fear is overblown and "a distraction from issues such as bias in systems that are already a problem" and other "near-term harms."

Considering the "prophecies of doom" in detail, we find that the less radically alarmist doom-sayers aren't talking about Skynet, HAL 9000, or even self-aware Asimovian robots circumventing the Three Laws to dominate their human creators. More immediately realistic warnings call attention to risks posed by such things as the "deep fake" programs Rowena discusses in her recent post. In the near future, we could see powerful AI "drive an exponential increase in the volume and spread of misinformation, thereby fracturing reality and eroding the public trust, and drive further inequality, particularly for those who remain on the wrong side of the digital divide."

On the other hand, a member of an e-mail list I subscribe to has written an essay maintaining that the real existential threat of advanced AI doesn't consist of openly scary threats, but irresistibly appealing cuteness:

Your Lovable AI Buddy

Suppose, in the near future, everyone has a personal AI assistant, more advanced and individually programmed than present-day Alexa-type devices? Not only would this handheld, computerized friend keep track of your schedule and appointments, preorder meals from restaurants, play music and stream videos suited to your tastes, maybe even communicate with other people's AI buddies, etc., "It knows all about you, and it just wants to make you happy and help you enjoy your life. . . . It would be like a best friend who’s always there for you, and always there. And endlessly helpful." As he mentions, present-day technology could probably create a device like that now. And soon it would be able to look much more lifelike than current robots. Users would get emotionally attached to it, more so than with presently available lifelike toys. What could possibly be the downside of such an ever-present, "endlessly helpful" friend or pet?

Not so fast. If we're worried about hacking and misinformation now, think of how easily our hypothetical AI best friend could subtly shape our view of reality. At the will of its designers, it could nudge us toward certain political or social viewpoints. It could provide slanted, "carefully filtered" answers to sensitive questions. This development wouldn't require "a self-aware program, just one that seems to be friendly and is capable of conversation, or close enough." Building on its vast database of information collected from the internet and from interacting with its user, "It wouldn’t just be trained to emotionally connect with humans, it would be trained to emotionally manipulate humans."

In a society with a nearly ubiquitous as well as almost omniscient product like that, the disadvantaged folks "on the wrong side of the digital divide" who couldn't afford one might even be better off, at least in the sense of privacy and personal freedom.

Margaret L. Carter

Carter's Crypt

Thursday, June 01, 2023

Brain-Computer Interface

Elon Musk's Neuralink Corporation is developing an implant intended to treat severe brain disorders and enable paralyzed patients to control devices remotely. As a long-term goal, the company envisions "human enhancement, sometimes called transhumanism."

Neuralink

Here's a brief article on the capacities and limitations of brain implants:

Brain Implants

A Wikipedia article on brain-computer interface technology, which goes back further than I'd realized:

Brain-Computer Interface

In fields such as treatments for paraplegics and quadriplegics, this technology shows promise. It "was first developed to help people paralyzed with spinal injuries or conditions like Locked-in syndrome — when a patient is fully conscious but can't move any part of the body except the eyes — to communicate." Connection between the brain's motor cortex and a computer has enabled a paralyzed patient to type 90 characters per minute. Another kind of implant allowed a man with a robotic hand to feel sensations as if he still had natural skin. A "brain-spine interface" has enabled a man with a spinal cord injury to walk naturally. Deep brain stimulation has been helping people with Parkinson's disease since the 1990s. Most of these applications, however, are still in the experimental stage with human patients or have been tested only on animals. For instance, a monkey fitted with a Neuralink learned to control a pong paddle with its mind.

Will such an implant eventually achieve telepathy, though, as Musk claims? Experts say no, at least not in the current stage of neuroscience, because "we don't really know where or how thoughts are stored in the brain. We can't read thoughts if we don't understand the neuroscience behind them."

What about a paralyzed person controlling a whole robotic body, like the protagonist of AVATAR remotely living in an alien body? Probably not anytime soon, but I was amazed to learn how much closer we are to achieving that phase of "transhumanism" than I'd imagined. If it's ever reached, might the very rich choose to live their later years remotely in beautiful, strong robotic bodies and thereby enjoy a form of eternal youth -- as long as their flesh brains can be kept alive, anyway?

Margaret L. Carter

Carter's Crypt

Thursday, April 13, 2023

How Will AI Transform Childhood?

According to columnist Tyler Cowen, "In the future, middle-class kids will learn from, play with and grow attached to their own personalized AI chatbots."

I read this essay in our local newspaper a couple of weeks ago. Unfortunately, I wasn't able to find the article on a site that didn't require registering for an account to read it. The essence of its claim is that "personalized AI chatbots" will someday, at a not too far distant time, become as ubiquitous as pets, with the advantage that they won't bite. Parents will be able to control access to content (until the kid learns to "break" the constraints or simply borrows a friend's less restricted device) and switch off the tablet-like handheld computers remotely. Children, Cowen predicts, will love these; they'll play the role of an ever-present imaginary friend that one can really interact with and get a response.

He envisions their being used for game play, virtual companionship, and private AI tutoring (e.g., learning foreign languages much cheaper than from classes or individual tutors) among other applications. I'm sure our own kids would have loved a device like this, if it had been available in their childhood. I probably would have, too, back when dinosaurs roamed the Earth and similar inventions were the wild-eyed, futuristic dreams of science fiction. If "parents are okay with it" (as he concedes at one point), the customized AI companion could be a great boon—with appropriate boundaries and precautions. For instance, what about the risks of hacking?

One thing that worries me, however, isn't even mentioned in the article (if I remember correctly from the paper copy I neglected to keep): The casual reference to "middle-class kids." The "digital divide" has already become a thing. Imagine the hardships imposed on students from low-income families, who couldn't afford home computers, by the remote learning requirements of the peak pandemic year. What will happen when an unexamined assumption develops that every child will have a personal chatbot device, just as many people and organizations, especially businesses and government offices, now seem to assume everybody has a computer and/or a smart phone? (It exasperates me when websites want to confirm my existence by sending me texts; I don't own a smart phone, don't text, and don't plan to start.) Not everybody does, including some who could easily afford them, such as my aunt, who's in her nineties. Those assumptions create a disadvantaged underclass, which could only become more marginalized and excluded in the case of children who don't belong to the cohort of "middle-class kids" apparently regarded as the norm. Will school districts provide free chatbot tablets for pupils whose families fall below a specified income level? With a guarantee of free replacement if the thing gets broken, lost, or stolen?

In other AI news, a Maryland author has self-published a horror book for children, SHADOWMAN, with assistance from the Midjourney image-generating software to create the illustrations:

Shadowman

In an interview quoted in a front-page article of the April 12, 2023, Baltimore Sun, she explains that she used the program to produce art inspired by and in the style of Edward Gorey. As she puts it, "I created the illustrations, but I did not hand draw them." She's perfectly transparent about the way the images were created, and the pictures don't imitate any actual drawings by Gorey. The content of each illustration came from her. "One thing that's incredible about AI art," she says, "is that if you have a vision for what you're wanting to make it can go from your mind to being." And, as far as I know, imitating someone else's visual or verbal style isn't illegal or unethical; it's one way novice creators learn their craft. And yet . . . might this sort of thing, using software "trained" on the output of one particular creator, skate closer to plagiarism than some other uses of AI-generated prose and art?

Another AI story in recent news: Digidog, a robot police K-9 informally known as Spot, is being returned to active duty by the NYPD. The robot dog was introduced previously but shelved because some people considered it "creepy":

Robot Dog

Margaret L. Carter

Carter's Crypt

Thursday, December 09, 2021

Xenobots

Organic "robots" developed from frog cells have learned to reproduce:

Self-Replicating Robots

Formed from the stem cells of the African clawed frog, the "xenobots," as was revealed in 2020, "could move, work together in groups and self-heal." Now they've developed the ability to reproduce "in a way not seen in plants and animals." The article compares them to Pac-Man figures, and there's a video clip showing them in action. Although they have no practical use yet, eventually they may be capable of applications such as "collecting microplastics in the oceans, inspecting root systems, and regenerative medicine."

These xenobots problematize the definition of "life." Are they robots or organisms? Furthermore, they potentially raise the question of what constitutes intelligence. If intelligence means the ability to respond to environmental changes by adapting one's behavior, even plants and bacteria have it. If true intelligence requires sapience—consciousness—it may be restricted to us, some other higher primates, and a few cetaceans. But if intelligence mainly equals problem-solving, the xenobots do exhibit "plasticity and ability of cells to solve problems."

Margaret L. Carter

Carter's Crypt

Thursday, October 07, 2021

Astro the Robot

Amazon has invented a household robot called Astro, described as about the size of a small dog. It's "Alexa on wheels" but a bit more:

Amazon Robot

Astro can roll around the house with its camera, on a 42-inch arm, enabling you to keep an eye on children from another room. Or you can view your home remotely when you're away. You might use this feature to check on a vulnerable family member who lives alone. Like a tablet, it can play videos and access the internet. Like Alexa, it can answer questions. Its screen can be used for video chatting.

It can't navigate stairs, although (like the Roomba) it knows not to fall down them. Unfortunately, it can't pick up things. I suspect that ability will come along sooner or later. It can carry small objects from room to room, though, if a human user loads the objects, and facial recognition allows Astro to deliver its cargo to another person on command. It could be remotely commanded to take medication or a blood pressure cuff to that elderly relative who lives by herself.

Amazon's goal is for Astro to become a common household convenience within ten years. Even if you have $999 to spare, you can't order one right now. The device is being sold only to selected customers by invitation. Amazon's vice president of product says the robot wasn't named after the Jetsons' dog. The first possible origin for the name that occurred to me, however, was the robot Astro Boy, from a classic early anime series.

Considering the way people talk to their pets as if the animals can understand, I can easily imagine an owner carrying on conversations with Astro almost like an intelligently responsive housemate.

Margaret L. Carter

Carter's Crypt

Thursday, August 26, 2021

Can AI Be a Bad Influence?

In a computer language-learning experiment in 2016, a chat program designed to mimic the conversational style of teenage girls devolved into spewing racist and misogynistic rhetoric. Interaction with humans quickly corrupted an innocent bot, but could AI corrupt us, too?

AI's Influence Can Make Humans Less Moral

Here's a more detailed explanation (from 2016) of the Tay program and what happened when it was let loose on social media:

Twitter Taught Microsoft's AI Chatbot to Be a Racist

The Tay Twitter bot was designed to get "smarter" in the course of chatting with more and more users, thereby, it was hoped, "learning to engage people through 'casual and playful conversation'." Unfortunately, spammers apparently flooded it with poisonous messages, which it proceeded to imitate and amplify. If Tay was ordered, "Repeat after me," it obeyed, enabling anyone to put words in its virtual mouth. However, it also started producing racist, misogynistic, and just plain weird utterances spontaneously. This debacle raises questions such as "how are we going to teach AI using public data without incorporating the worst traits of humanity?"
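The failure mode described above can be sketched in a few lines. This is an invented toy model, not Microsoft's actual code: a bot that obeys "repeat after me" verbatim and otherwise adds every message it hears to the pool it imitates, with no content filter, so poisoned input inevitably becomes poisoned output.

```python
import random

# Toy model (invented for illustration) of an imitation-learning chatbot
# with the two weaknesses attributed to Tay: a literal "repeat after me"
# command, and unfiltered learning from whatever users say.
class ImitationBot:
    def __init__(self):
        self.phrases = ["Hello!"]  # seed vocabulary

    def hear(self, message):
        prefix = "repeat after me: "
        if message.startswith(prefix):
            # Anyone can put words in the bot's mouth.
            return message[len(prefix):]
        # Every message is "learned" verbatim -- no moderation step --
        # then the bot replies by imitating something it has heard.
        self.phrases.append(message)
        return random.choice(self.phrases)
```

A moderation layer between `hear` and `self.phrases.append` is exactly the missing piece the quoted question ("how are we going to teach AI using public data without incorporating the worst traits of humanity?") is asking about.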

The L.A. TIMES article linked above, with reference to the Tay episode as a springboard for discussion, explores this problem in more general terms. How can machines "make humans themselves less ethical?" Among other possible influences, AI can offer bad advice, which people have been observed to follow as readily as they do online advice from live human beings; AI advice can "provide a justification to break ethical rules"; AI can act as a negative role model; it can be easily used for deceptive purposes; outsourcing ethically fraught decisions to algorithms can be dangerous. The article concludes that "whenever AI systems take over a new social role, new risks for corrupting human behavior will emerge."

This issue reminds me of Isaac Asimov's Three Laws of Robotics, especially since I've recently been rereading some of his robot-related fiction and essays. As you'll recall, the First Law states, "A robot may not injure a human being or, through inaction, allow a human being to come to harm." In one of Asimov's early stories, a robot learns to lie in order to tell people what they want to hear. As this machine perceives the problem of truth and lies, the revelation of distressing truths would cause humans emotional pain, and emotional harm is still harm. Could AI programs be taught to avoid causing emotional and ethical damage to their human users? The potential catch is that a computer intelligence can acquire ethical standards only by having them programmed in by human designers. As a familiar precept declares, "Garbage in, garbage out." Suppose programmers train an AI to regard the spreading of bizarre conspiracy theories as a vital means of protecting the public from danger?

It's a puzzlement.

Margaret L. Carter

Carter's Crypt

Thursday, August 05, 2021

RoboDogs

Is the public ready for a RoboDog on the police force? New York City, Honolulu, and the Dutch national police force have tried a robotic police dog nicknamed Spot, created by Boston Dynamics:

Useful Hounds or Dehumanizing Machines?

In connection with the COVID-19 pandemic, these automatons have scanned people for fevers and conducted remote interviews with positive-testing patients. In Belgium, one was sent to check the site of a drug lab explosion. Utility companies can use them "to inspect high-voltage zones and other hazardous areas." They can also "monitor construction sites, mines and factories, equipped with whatever sensor is needed for the job." A representative of the manufacturer points out, "The first value that most people see in the robot is taking a person out of a hazardous situation." On the negative side, some critics worry about weaponization of robots, especially under the control of the police. Another company, Ghost Robotics, has no qualms about providing similar robot dogs to the military. While Boston Dynamics tries to promote its product as friendly and helpful, some people worry about the potential for "killer robots" employed by police departments. The issue of human rights with regard to robot police dogs brings to mind Asimov's robot stories, with the Three Laws to limit the potential for harm, as well as governmental hyper-caution demonstrated by a prohibition against deploying robots on Earth.

An article exploring why Spot, renamed Digidog in New York, didn't work out well there:

The NYPD's Robot Dog

The design of the "dog," with its "very imposing profile," the way it moves, and the context of its use influenced the public's response to it. At a time when police departments were facing increased criticism about officers' interactions with civilians, Digidog was taken into a public housing project, where it exacerbated the "very big power imbalance that’s already there." It's proposed that the reaction to Digidog might have been more positive if people had seen it used for jobs such as bomb disposal or rescuing victims from fires. Also, science fiction has created stereotypical expectations of what robots are and how they function, ideas both positive and negative.

I find these machines a little disappointing because they don't live up to my idea of a true robot. The animatronic hounds can't act on their own. At most, when ordered to move in a particular direction, they can navigate stairs or rough terrain without being micromanaged. Spot can act autonomously "only if it’s already memorized an assigned route and there aren’t too many surprise obstacles," a long way from science-fiction robots that can receive broad commands and carry out all the necessary steps without further guidance. Also, the robot "hounds" don't look much like real dogs. Why weren't they given a canine appearance, with fur as well as other animal-like features? Wouldn't people accept them more readily if they were cute? Maybe, as hinted in the article linked above, that was part of the reason for their failure in New York. Surely they could be made more pet-like without falling into the uncanny valley of "too" realistic.

Margaret L. Carter

Carter's Crypt

Thursday, June 24, 2021

Woebot

"Virtual help agents" have been developed to perform many support tasks such as counseling refugees and aiding people to access disability benefits. Now a software app named Woebot is claimed to perform actual talk therapy:

Chatbot Therapist

Created by a team at Stanford, "Woebot uses brief daily chat conversations, mood tracking, curated videos, and word games to help people manage mental health." For $39 per month, you can have Woebot check in with you once a day. It doesn't literally talk but communicates by Facebook Messenger. The chatbot mainly asks questions and works through a "decision tree" not unlike, in principle, a choose-your-own-adventure story. It follows the precepts of cognitive therapy, guiding patients to alter their own mental attitudes. Woebot is advertised as "a treatment in its own right," an accessible alternative for people who can't get conventional therapy for whatever reason. If the AI encounters someone in a mental-health crisis, "it suggests they seek help in the real world" and lists available resources. Text-based communication with one's "therapist" may sound less effective than oral conversation, yet researchers found that "the texting option actually reduced interpersonal anxiety."
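The "decision tree" structure mentioned above can be sketched concretely. This is a minimal, hypothetical model (the node names and wording are invented, not Woebot's actual script): each node holds a question, and the user's answer selects the branch to the next node, exactly like turning to a numbered page in a choose-your-own-adventure book.

```python
# Hypothetical decision-tree chat flow in the spirit the article
# describes: question nodes, with answers mapped to follow-up nodes.
TREE = {
    "start":   ("How are you feeling today?",
                {"good": "celebrate", "bad": "explore"}),
    "explore": ("Can you name the thought behind that feeling?",
                {"yes": "reframe", "no": "prompt"}),
    "celebrate": ("Great! What went well?", {}),
    "prompt":  ("Try finishing this sentence: 'I feel low because...'", {}),
    "reframe": ("Is there another way to look at that thought?", {}),
}

def step(node, answer=None):
    """Return (next_node, question). With no answer, re-ask the current node."""
    question, branches = TREE[node]
    if answer is None:
        return node, question
    next_node = branches.get(answer, node)  # unknown answer: stay and re-ask
    return next_node, TREE[next_node][0]
```

The cognitive-therapy angle lives in the wording of the nodes (naming a thought, then reframing it); the program itself is just a table lookup, which is why such a bot can be "a treatment in its own right" without any open-ended language understanding.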

It's possible that, within the limits of its abilities, this program may be better than a human therapist in that one respect. Many people open up more to a robot than to another person. Human communication may be hampered by the "fear of being judged." Alison Darcy, one of the creators of Woebot, remarks, "There’s nothing like venting to an anonymous algorithm to lift that fear of judgement." One of Woebot's forerunners in this field was a computer avatar "psychologist" called Ellie, developed at the University of Southern California. In a 2014 study of Ellie, "patients" turned out to be more inclined to speak freely if they thought they were talking to a bot rather than a live psychologist. Ellie has an advantage over Woebot in that she's programmed to read body language and tone of voice to "pick up signs of depression and post-traumatic stress disorder." Data gathered in these dialogues are sent to human clinicians. More on this virtual psychologist:

Ellie

Human beings often anthropomorphize inanimate objects. One comic strip in our daily paper regularly shows the characters interacting and arguing with an Alexa-type program like another person in the room and treating the robot vacuum as if it's at least as intelligent as a dog. So why not turn in times of emotional distress to a therapeutic AI? We can imagine a patient experiencing "transference" with Woebot—becoming emotionally involved with the AI in a one-way dependency of friendship or romantic attraction—a quasi-relationship that could make an interesting SF story.

Margaret L. Carter

Carter's Crypt

Thursday, January 07, 2021

Robot Pets

Here are two articles about robotic cats and dogs manufactured to serve as substitutes for live pets:

Robotic Pets Help Seniors Avoid Loneliness

Can Robot Pets Provide Comfort?

A FAQ posted by a company that makes these artificial pets:

Joy for All

The products are claimed to "feel, look and sound like real pets."

Some years ago, I remember reading news stories about robot dogs that looked like robots rather than real dogs. They were metallic instead of furry, which doesn't sound to me like a proper appearance for a surrogate pet. It would seem more like a clever toy, not a quasi-living animal. These present-day robotic pets look like animals, as shown in the still photos anyway. Or, at least, like cuddly stuffed animals—the kitty pictured on the first page linked above seems to resemble a toy more than a live cat. I didn't come across a video showing whether or not their movements appear natural rather than mechanical. They're described as interactive, but the FAQ linked above doesn't specifically state what they do. One of the articles does mention the robot dog performing some typical canine actions such as barking, panting, etc.

These devices remind me of Philip K. Dick's classic DO ANDROIDS DREAM OF ELECTRIC SHEEP? In that dystopian future, human-caused mass extinctions have made live animals extremely rare and expensive. Therefore, people buy artificial pets as substitutes, such as the electric sheep in the title. Fortunately, we're nowhere near that plight yet. Today's robot dogs and cats are meant as pet surrogates for isolated elderly persons who can't own real animals because of health, housing, or financial constraints.

Margaret L. Carter

Carter's Crypt

Thursday, September 10, 2020

More on Robots

If convenient, try to pick up a copy of the September 2020 NATIONAL GEOGRAPHIC, which should still be in stores at this time. The feature article, "Meet the Robots," goes into lengthy detail about a variety of different types of robots and their functions, strengths, and limitations. The cover shows a mechanical hand delicately holding a flower. The article on the magazine's website is behind a paywall, unfortunately.

Profusely illustrated, it includes photos of robots that range from human-like to vaguely humanoid to fully non-anthropomorphic. One resembles an ambulatory egg, another a mechanical octopus. As the text points out, form follows function. Some machines would gain nothing by being shaped like people, and for some tasks the human form would actually be more of a drawback than a benefit. Some of those devices perform narrowly defined, repetitive jobs such as factory assembly, while others more closely resemble what science-fiction fans think of as "robots"—quasi-intelligent, partly autonomous machines that can make decisions among alternatives. In many cases, they don't "steal jobs" but, rather, fill positions for which employers have trouble hiring enough live workers. Robots don't get sick or tired, don't suffer from boredom, and can spare human workers from exposure to hazards. On the other hand, the loss of some kinds of jobs to automation is a real problem, to which the article devotes balanced attention. Although an increasingly automated working environment may create new jobs in the long run, people can't be retrained for those hypothetical positions overnight.

Some robots carry their "brains" within their bodies, as organic creatures do, while others take remote direction from computers (Wi-Fi efficiency permitting—now there's an intriguing plot premise, a society dependent on robots controlled by a central hive-mind AI, which blackmailers or terrorists might threaten to disable). On the most lifelike end of the scale, an animated figure called Mindar, "a metal and silicone incarnation of Kannon," a deity in Japanese Buddhism, interacts with worshipers. Mindar contains no AI, but that feature may eventually be added. American company Abyss Creations makes life-size, realistic sex dolls able to converse with customers willing to pay extra for an AI similar to Alexa or Siri. Unfortunately for people envisioning truly autonomous robot lovers, from the neck down they're still just dolls.

We're cautioned against giving today's robots too much credit. They can't match us in some respects, such as the manipulative dexterity of human hands, bipedal walking, or plain "common sense." We need to approach them with "realistic expectations" rather than thinking they "are far more capable than they really are." Still, it seems wondrous to me that already robots can pick crops, milk cows, clean and disinfect rooms (I want one of those), excavate, load cargo, make deliveries in office buildings (even asking human colleagues to operate elevators for them), take inventory, guide patients through exercise routines, arrange flowers, and "help autistic children socialize." Considering that today's handheld phones are more intelligent than our first computer was (1982), imagine what lies ahead in the near future!

Margaret L. Carter

Carter's Crypt

Thursday, August 27, 2020

Robot Caretakers

Here's another article, long and detailed, about robot personal attendants for elderly people:

Meet Your Robot Caretaker

I was a little surprised that the first paragraph suggests those machines will be a common household convenience in "four or five decades." I'd have imagined their becoming a reality sooner, considering that robots able to perform some of the necessary tasks already exist. The article mentions several other countries besides Japan where such devices are now commercially available.

The article enumerates some of the potential advantages of robot health care aides: (1) There's no risk of personality conflicts, as may develop between even the most well-intentioned people. (2) Automatons don't need time off. (3) They don't get tired, confused, sick, or sloppy. (4) They can take the place of human workers in low-paid, often physically grueling jobs. (5) Automatons are far less likely to make mistakes, being "programmed to be consistent and reliable." (6) In case of error, they can correct the problem with no emotional upheaval to cloud their judgment or undermine the client-caretaker relationship. (7) The latter point relates to an actual advantage many prospective clients see in having nonhuman health aides; there's no worry about hurting a robot's feelings. (8) Likewise, having a machine instead of a live person to perform intimate physical care, such as bathing, would avoid embarrassment.

Contrary to hypothetical objections that health-care robots would deprive human aides of work, one expert suggests that "robots handling these tasks would free humans to do other, more important work, the kind only humans can do: 'How awesome would it be for the home healthcare nurse to play games, discuss TV shows, take them outside for fresh air, take them to get their hair done, instead of mundane tasks?'" Isolated old people need "human connection" that, so far, robots can't provide. The article does, however, go on to discuss future possibilities of emotional bonding with robots and speculates about the optimal appearances of robotic home health workers. A robot designed to take blood pressure, administer medication, etc. should have a shape that inspires confidence. On the other hand, it shouldn't look so human as to fall into the uncanny valley.

As far as "bonding" is concerned, the article points out that "for most people, connections to artificial intelligence or even mechanical objects can happen without even trying." The prospect of more lifelike robots and deeper bonding, however, raises another question: Would clients come to think of the automaton as so person-like that some of the robotic advantages listed above might be negated? I'm reminded of Ray Bradbury's classic story about a robot grandmother who wins the love of a family of motherless children, "I Sing the Body Electric"; one child fears losing the "grandmother" in death, like her biological mother.

Margaret L. Carter

Carter's Crypt

Thursday, July 23, 2020

Digisexuals

In 2018, Akihiko Kondo, a Japanese school administrator, married a hologram of a "cyber celebrity," Hatsune Miku, an animated character with no physical existence. She dwells in a Gatebox, "which looks like a cross between a coffee maker and a bell jar, with a flickering, holographic Miku floating inside." She can carry on simple conversations and do tasks such as switching lights on and off (like Alexa, I suppose). Although the marriage has no legal status, Kondo declares himself happy with his choice:

Rise of Digisexuals

According to a different article, Miku originated as "computer-generated singing software with the persona of a big-eyed, 16-year-old pop star with long, aqua-colored hair." Gatebox's offer of marriage registration forms for weddings between human customers and virtual characters has been taken up by at least 3,700 people in Japan (as of 2018). People who choose romance with virtual persons are known as "digisexuals." The CNN article linked above notes, "Digital interactions are increasingly replacing face-to-face human connections worldwide."

Of course, "digital interactions" online with real people on the other end are different from making emotional connections with computer personas. The article mentions several related phenomena, such as the robotic personal assistants for the elderly becoming popular in Japan. Also, people relate to devices such as Siri and Alexa as if they were human and treat robot vacuums like pets. I'm reminded of a cartoon I once saw in which a driver of a car listens to the vehicle's GPS arguing with his cell phone's GPS about which route to take. Many years ago, I read a funny story about a military supercomputer that transfers "her" consciousness into a rocket ship in order to elope with her Soviet counterpart. The CNN article compares those anthropomorphizing treatments of electronic devices to the myth of Pygmalion, the sculptor who constructed his perfect woman out of marble and married her after the goddess Aphrodite brought her to life. As Kondo is quoted as saying about holographic Miku's affectionate dialogue, "I knew she was programmed to say that, but I was still really happy." Still, the fact that he "completely controls the romantic narrative" makes the relationship radically different from human-to-human love.

Falling in love with a virtual persona presents a fundamental dilemma. As long as the object of affection remains simply a program designed to produce a menu of responses, however sophisticated, the relationship remains a pleasant illusion. If, however, the AI becomes conscious, developing selfhood and emotions, it can't be counted on to react entirely as a fantasy lover would. An attempt to force a self-aware artificial person to keep behaving exactly the way the human lover wishes would verge on erotic slavery. You can have either an ideal, wish-fulfilling romantic partner or a sentient, voluntarily responsive one, not both in the same person.

Margaret L. Carter

Carter's Crypt

Thursday, February 27, 2020

Robots Writing

The March 2020 issue of ROMANCE WRITERS REPORT (the magazine of Romance Writers of America) includes an article by Jeannie Lin about artificial intelligence programs writing prose, titled "The Robots Are Coming to Take Our Jobs." I was surprised to learn that software that composes written material, including fiction, already exists. No matter how competent such programs eventually become, they can't quite take over our jobs yet, because (as Lin points out) U.S. copyright law requires that a work submitted for registration "was created by a human being." That provision, I suppose, leaves open the question of how much input a computer program can have into a work that still counts as "created by a human being." Lin tested a program called GPT-2, which takes an existing paragraph of text as a model to write original material as a continuation of the sample work. The AI's follow-up to the opening of PRIDE AND PREJUDICE strikes me as barely coherent. On the other hand, the paragraph generated in response to a sample from one of Lin's own books comes out better, and Lin acknowledges, "The style was not unlike my own." GPT-2, however, as Lin evaluates it, "is very limited. . . only capable of generating a paragraph of text and is unable to move beyond the very narrow confines of the lines provided."

Here's the website for that program:

OpenAI

The site claims its "unsupervised language model. . . generates coherent paragraphs of text, achieves state-of-the-art performance on many language modeling benchmarks, and performs rudimentary reading comprehension, machine translation, question answering, and summarization."
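As a toy illustration of the prompt-and-continue workflow Lin describes, the sketch below "continues" a prompt using a simple word-level Markov chain built from a sample text. To be clear, this is my own invented miniature, nothing like GPT-2's actual neural architecture; real language models learn vastly richer statistics, but the basic idea of extending a given passage word by word is the same:

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def continue_text(chain, prompt, length=15, seed=0):
    """Generate a continuation of `prompt` by walking the chain."""
    rng = random.Random(seed)
    current = prompt.split()[-1]
    out = []
    for _ in range(length):
        followers = chain.get(current)
        if not followers:  # dead end: no observed successor
            break
        current = rng.choice(followers)
        out.append(current)
    return " ".join(out)

# Build the chain from the famous opening of PRIDE AND PREJUDICE.
sample = ("It is a truth universally acknowledged, that a single man in "
          "possession of a good fortune, must be in want of a wife.")
chain = build_chain(sample)
print(continue_text(chain, "a single man in", length=8))
```

With such a tiny sample, the output is an even more "barely coherent" shuffle of Austen's own words than what Lin got from GPT-2, which makes the gap between statistical mimicry and actual authorship easy to see.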

Here's an article about how AI writes content and some ways that technology is already being used—for instance, to produce sports reports for the Associated Press and other news outlets, financial reports, summaries of longer documents, personalized e-mails, and more:

Artificial Intelligence Can Now Write Amazing Content

Computer programs have also written novels, such as an autobiographical account of the AI's cross-country travels:

AI Wrote a Road Trip Novel

It may be reassuring to read that the result is described as "surreal" and not likely to be mistaken for a human-created book.

Nevertheless, there's a National Novel Generation Month (NaNoGenMo) competition in November for programmers trying to produce AI capable of composing novels. According to Lin's article, this challenge "has generated more than four hundred novels," although they're "more intriguing for how they were created than for their content." Here's the website for NaNoGenMo:

National Novel Generation Month

While I have no desire to cede my creative operations to a computer, I've often wished for a program that would take my detailed outline and compose a novel from it. The first-draft stage is the phase of writing I enjoy least; software that would present me with a draft ready to be edited would be a great boon. And apparently that's not an impossible goal, judging from a novel produced in collaboration between human authors and an AI, which advanced beyond the first round in a contest:

AI Novel

According to the article, "The novel was co-written by Hitoshi Matsubara of Future University Hakodate, his team, and the AI they created. By all accounts, the novel was mostly written by the humans. The L.A. Times reported that there was about 80% human involvement." The process worked this way: "Humans decided the plot and character details of the novel, then entered words and phrases from an existing novel into a computer, which was able to construct a new book using that information." Sounds like magic!

Still, we're in no danger of losing our jobs to robot authors yet. Aside from the novelty of the concept, I do wonder why we'd need AI-generated fiction on top of the thousands of books each year that already languish unread on Amazon pages, buried in the avalanche of new releases.

Margaret L. Carter

Carter's Crypt

Thursday, January 23, 2020

Anticipating Androids

In Mary Shelley's novel, Victor Frankenstein apparently constructed his creature by stitching together parts of cadavers. (His first-person narrative stays vague on the details.) Considering the rapid decay of dead flesh as well as the problem of reanimating such a construct, if we ever get organic androids or, as they're called in Dungeons and Dragons, flesh golems, they're more likely to be created by a method similar to this: Robotics experts at the University of Vermont have designed living robots made from frog cells, which were constructed and tested by biologists at Tufts University:

Xenobots

They're made of living cells derived from frog embryos. Joshua Bongard, one of the researchers on this project, describes the xenobots as "a new class of artifact: a living, programmable organism." The frog cells "can be coaxed to make interesting living forms that are completely different from what their default anatomy would be." Only a millimeter wide, they potentially "can move toward a target, perhaps pick up a payload (like a medicine that needs to be carried to a specific place inside a patient)—and heal themselves after being cut." They might also be able to perform such tasks as cleaning up radioactive materials and other contaminants or scraping plaque out of arteries. While this process doesn't amount to creating life, because it works with already living cells, it does reconfigure living organisms into novel forms. Although there's no hint of plans to build larger, more complicated artificial organisms, the article doesn't say that's impossible, either.

If an android constructed by this method could be made as complex as a human being, could it ever have intelligence? In an experiment I think I've blogged about in the past, scientists at the University of California, San Diego have grown cerebral "organoids"—miniature brains—from stem cells:

Lab-Grown Mini-Brains

These mini-brains, about the size of a pea, can "mimic the neural activity" of a pre-term fetus. Researchers hope these organoids can be used to study brain disorders and perhaps to replace lost or damaged areas of living human brains. At present, they can't think or feel. But suppose they're eventually grown large and complex enough to—maybe—develop sentience or even consciousness? In that case, it could be reasonably argued that they should have individual rights. The "disembodied brain in a jar" that's a familiar trope of SF and horror, is, according to the article, a highly unlikely outcome of this research. If these miniaturized brains ever became complex enough to transplant into a more highly developed version of the frog-cell "xenobots," however, the question of personhood would surely arise.

Margaret L. Carter

Carter's Crypt

Thursday, February 21, 2019

Telepresence

I recently read an article about college students confined to their homes by medical issues (e.g., a pregnant woman on enforced bed rest) "attending" classes by means of telepresence robots. Here's a page explaining what these devices are and how they work:

What Telepresence Robots Can Do

Actually, these aren't true robots as I understand the term. They have no autonomy of any kind; they're moved by the user through remote control. The "robot" is a mobile device that allows the operator to see, hear, speak, and be seen in a remote location such as a classroom, hospital (telemedicine), or business meeting. It consists of a "computer, tablet, or smartphone-controlled robot which includes a video-camera, screen, speakers and microphones so that people interacting with the robot can view and hear its operator and the operator can simultaneously view what the robot is 'looking' at and 'hearing'." In other words, judging from the pictures, it's a computer screen rolling around on a mobile platform. Thus the user can relate to people at a distance almost as if he or she were in the room with them.

Telepresence reminds me of "The Girl Who Was Plugged In," by James Tiptree, Jr., except that Tiptree's story portrays a much darker vision. Beautiful androids without functional brains are grown in vitro for the explicit purpose of becoming celebrities, essentially famous for being famous, to encourage the public to buy the products of these media stars' commercial sponsors. Unknown to their fans, these constructs are mindless automata remotely operated by human controllers whose brains are linked to the androids. The girl of the title, born with a condition that makes her physically feeble as well as ugly (by conventional social standards), is one such operator. A young man falls in love with the android, thinking she's a real woman under some kind of mind control, and breaks into the booth occupied by the operator. The encounter doesn't end well for her. It's a grim, desperately sad story.

Fortunately, the telepresence robots now in use have no "uncanny valley" similarity to human beings, much less the capacity to pass for live people. So the exact situation imagined in Tiptree's story—with its dark implications regarding the objectification of women, the performance of gender roles, the valuation of outward appearance over personality and intelligence, the devaluing of people born less than perfect—won't materialize in our society anytime soon. If thoroughly human-seeming androids did become available, though, might some people with severe disabilities voluntarily choose to present themselves to the outside world through such proxies? That possibility could hold both promise and hazards for the individuals involved (not to mention the class divide between those who could afford an android proxy and those who wanted one but couldn't afford it).

In THE SHIP WHO SEARCHED, by Mercedes Lackey (one of the novels spun off from Anne McCaffrey's THE SHIP WHO SANG), the woman who acts as the "brain" of a brain ship, controlling all its functions and experiencing the environment through its sensor array from inside her permanently sealed shell, purchases a lifelike android for the purpose of direct, physical interaction with her "brawn" (her physically "normal" male partner). Unlike the dysfunctional situation in Tiptree's story, in THE SHIP WHO SEARCHED the man is fully aware of his partner's status, celebrates her gifts, and has fallen in love with her as a person despite the impossibility of physical contact. As with most technology, telepresence will doubtless have positive or negative impacts depending on how individuals use and relate to it.

Margaret L. Carter

Carter's Crypt

Thursday, January 10, 2019

Robots in the Home

More new developments in household robotics:

Are Domestic Robots the Way of the Future?

One problem foregrounded by this article is people's expectation for robots to look humanoid, versus the optimal shape for efficiently performing their functions. A real-world autonomous floor cleaner, after all, doesn't take the form of "a humanoid robot with arms" able to "push a vacuum cleaner." A related problem is that our household environments, unlike factories, are designed to be interacted with by human beings rather than non-humanoid machines. Research by scientists at Cornell University has been trying "to balance our need to be able to relate emotionally to robots with making them genuinely useful."

Dave Coplin, CEO of The Envisioners, promotes the concept of "social robotics":

Domestic Robots Are Coming in 2019

He advocates "trying to imbue emotion into communication between humans and robots," as, for example, training robots to understand human facial expressions. He even takes the rather surprising position that the household robot of the future, rather than a "slave" or "master," should be "a companion and peer to the family." According to Coplin, the better the communication between us and our intelligent machines, the more efficiently they will work for us. Potential problems need to be solved, however, such as the difficulty of a robot's learning to navigate a house designed for human inhabitants, as mentioned above. Security of data may also pose problems, because the robot of the future will need access to lots of personal information in order to do its job.

In Robert Heinlein's THE DOOR INTO SUMMER, the engineer narrator begins by creating single-task robots that sound a bit like the equivalent of Roombas. Later, he invents multi-purpose robotic domestic servants with more humanoid-like shapes, because they have to be almost as versatile as human workers. We're still a long way from the android grandmother in one of Ray Bradbury's classic stories, but robots are being designed to help with elder care in Japan. According to the article cited above, some potential customers want robots that may offer "companionship" by listening to their troubles or keeping pets company while owners are out. Now, if the robot could walk the dog, too, that would really be useful. The January NATIONAL GEOGRAPHIC mentions medical robots that can draw blood, take vital signs, and even shift bedridden patients. One snag with such machines: To have the power to lift objects of significant weight, not to mention human adults, a robot has to be inconveniently heavy (as well as expensive).

On the subject of balancing usefulness with the need for relating emotionally: In Suzette Haden Elgin's poem "Too Human by Half," an elderly woman grows so attached to her lifelike household robot that she can't bear to replace it when it starts to malfunction. "Replace JANE? . . . Just because she's getting OLD?" Therefore, when the company launches its next model, "they made every one of the units look exactly like a broom."

Margaret L. Carter

Carter's Crypt

Thursday, October 18, 2018

AI Rights

Here's an article on the PBS website exploring the issue of what might happen if artificial intelligences were granted the status of legal persons:

Artificial Intelligence Personhood

Corporations are already "persons" under the law, with free-speech rights and the capacity to sue and be sued. The author of this article outlines a legal procedure by which a computer program could become a limited liability company. He points out, somewhat alarmingly, "That process doesn’t require the computer system to have any particular level of intelligence or capability." The "artificial intelligence" could be simply a decision-making algorithm. Next, however, he makes what seems to me an unwarranted leap: "Granting human rights to a computer would degrade human dignity." First, bestowing some "human rights" on a computer wouldn't necessarily entail giving it full citizenship, particularly the right to vote. As the article mentions, "one person, one vote" would become meaningless when applied to a program that could make infinite copies of itself. But corporations have been legal "persons" for a long time, and they don't get to vote in elections.

The author cites the example of a robot named Sophia, who (in October 2017) was declared a citizen of Saudi Arabia:

Saudi Arabia Grants Citizenship to a Robot

Some commentators noted that Sophia now has more rights than women or migrant workers in that country. If Sophia's elevated status becomes an official precedent rather than merely a publicity stunt for the promotion of AI research, surely the best solution to the perceived problem would be to improve the rights of naturally born persons. In answer to a question about the dangers of artificial intelligence, Sophia suggests that people who fear AI have been watching "too many Hollywood movies."

That PBS article on AI personhood warns of far-fetched threats that are long-established cliches in science fiction, starting with, "If AI systems became more intelligent than people, humans could be relegated to an inferior role." Setting aside the fact that we have a considerable distance to go before computer intelligence attains a level anywhere near ours, giving us plenty of time to prepare, remember that human inventors design and program those AI systems. Something like Asimov's Laws of Robotics could be built in at a fundamental level. The most plausible of the article's alarmist predictions, in my opinion, is the possibility of a computer's accumulating "immortal wealth." It seems more likely, however, that human tycoons might use the AI as a front, not that it would use them as puppets.

Furthermore, why would an intelligent robot or computer want to rule over us? As long as the AI has the human support it needs to perform the function it was designed for, why would it bother wasting its time or brainpower on manipulating human society? An AI wouldn't have emotional weaknesses such as greed for money or lust for power, because emotion is a function of the body (adrenaline, hormone imbalances, accelerated breath and heartbeat, etc.). Granted, it might come to the rational conclusion that we're running the world inefficiently and need to be ruled for the benefit of ourselves and our electronic fellow citizens. That's the only immediate pitfall I can see in giving citizenship rights to sapient, rational machines that are programmed for beneficence. The idea of this potential hazard isn't new either, having been explored by numerous SF authors, as far back as Jack Williamson's "With Folded Hands" (1947). So relax, HAL won't be throwing us out the airlock anytime soon.

Margaret L. Carter

Carter's Crypt

Thursday, September 06, 2018

The Need for a Wife?

The 1971 launch of Ms. magazine included a now-classic essay titled "I Want a Wife," by Judy Syfers. It's very short; you can read the whole thing here:

I Want a Wife

The author, of course, isn't asking for a life's companion. What she wants is a multi-purpose appliance called a "wife" to run the household, handle persnickety domestic details, and deal with the demands of the outside world. (Note the tour-de-force of never applying a pronoun—and therefore a gender—to this hypothetical perfect wife.) For example:

"I want a wife to keep track of the children’s doctor and dentist appointments. And to keep track of mine, too. I want a wife to make sure my children eat properly and are kept clean. I want a wife who will wash the children’s clothes and keep them mended. I want a wife who is a good nurturant attendant to my children, who arranges for their schooling, makes sure that they have an adequate social life with their peers, takes them to the park, the zoo, etc. I want a wife who takes care of the children when they are sick, a wife who arranges to be around when the children need special care, because, of course, I cannot miss classes at school. My wife must arrange to lose time at work and not lose the job. It may mean a small cut in my wife’s income from time to time, but I guess I can tolerate that. Needless to say, my wife will arrange and pay for the care of the children while my wife is working. I want a wife who will take care of my physical needs. I want a wife who will keep my house clean. A wife who will pick up after me. I want a wife who will keep my clothes clean, ironed, mended, replaced when need be, and who will see to it that my personal things are kept in their proper place so that I can find what I need the minute I need it. I want a wife who cooks the meals, a wife who is a good cook. I want a wife who will plan the menus, do the necessary grocery shopping, prepare the meals, serve them pleasantly, and then do the cleaning up while I do my studying."

And how about this zinger? "I want a wife to go along when our family takes a vacation so that someone can continue to care for me and my children when I need a rest and change of scene."

When ours was a two-income household with school-age children at home, this essay struck a chord with me. As the author concludes, who wouldn't want a wife like that? Has any actual wife ever enjoyed the services of such a convenient paragon? It's an established truism that in two-career marriages, even those in which the husband shares household chores, the wife typically has the ultimate responsibility to ensure that everything gets done, and she performs most of the "emotional work" of maintaining family and social ties. On TV, Mrs. Brady and Mrs. Muir had faithful housekeepers. Still, the mothers in those sitcoms didn't lie around and relax—or devote themselves solely to intellectual enrichment. While Mrs. Muir was a professional writer, she spent plenty of time on household tasks. Both she and Mrs. Brady not only directed the housekeeper but joined in the hands-on work. What about previous eras, when middle- and upper-class women routinely had servants? Nevertheless, they had to oversee the servants, plan the meals, etc., not to mention hiring the housekeeper, nanny, maids, and other staff. Granted, maybe aristocratic ladies managed to shift all the domestic responsibility to the housekeeper and the butler, with nothing to do themselves but approve menus; their "wife" duties probably focused on maintaining the family's social position. Also, if we traveled back to, say, the nineteenth century and enjoyed the services of such workers, from our modern perspective we couldn't help being aware of how we were exploiting them.

If you're familiar with the stories of P. G. Wodehouse, you'll remember feckless bachelor Bertie Wooster's omnicompetent valet, Jeeves. What we all really need isn't a wife, but a Jeeves. Aside from a few references to his relatives, Jeeves doesn't seem to have a life outside his employment. He not only manages Bertie's apartment, meals, clothes, and other mundane necessities with impeccable perfection but often steps in to untangle Bertie's personal crises.

If we could afford a Jeeves in reality, though, we'd have to acknowledge his right to a life of his own, not to mention being nagged by our consciences for underpaying him. What we actually want is a Jeeves-type robot. Alexa and Siri can answer questions, carry out some tasks, and remind us of appointments, but otherwise we have quite a distance to go in terms of artificial servants. Wouldn't it be ideal to have the multi-skilled domestic robot often portrayed in science fiction, as affordable as a car and as efficient as Wodehouse's ideal "gentleman's gentleman"? Only one potential problem: A machine that could perform all those jobs with the nuanced expertise of a Jeeves would have to approach true AI. And then it might demand its rights as a sentient being, and we'd have to worry about exploiting it.

Margaret L. Carter

Carter's Crypt

Thursday, March 29, 2018

Robot Children, Puppies, and Fish

The March issue of SCIENTIFIC AMERICAN contains a new article on improving AI by developing robots that learn like children. Unfortunately, non-subscribers can't read the full article online, only a teaser:

Robots Learning Like Children

As we know, computer brains perform very well at many tasks that are hard for human beings, such as rapid math calculations and games such as chess and Go—systems with a finite number of clearly defined rules. Human children, by contrast, learn "by exploring their surroundings and experimenting with movement and speech." For a robot to learn that way, it has to be able to interact with its environment physically and process sensory input. Roboticists have discovered that both children and robots learn better when new information is consistently linked with particular physical actions. "Our brains are constantly trying to predict the future—and updating their expectations to match reality." A fulfilled prediction provides a reward in itself, and toddlers actively pursue objects and situations that allow them to make and test predictions. To simulate this phenomenon in artificial intelligence, researchers have programmed robots to maximize accurate predictions. The "motivation to reduce prediction errors" can even impel androids to be "helpful" by completing tasks at which human experimenters "fail." A puppy-like machine called the Sony AIBO learned to do such things as grasp objects and interact with other robots without being programmed for those specific tasks. The general goal "to autonomously seek out tasks with the greatest potential for learning" spontaneously produced those results. Now, that sounds like what we'd call learning!
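The "motivation to reduce prediction errors" the article describes can be sketched in a few lines of code. The toy below is my own invented illustration, not the researchers' actual system: the agent's only "reward" is learning progress, i.e., how much its own prediction error shrinks, so it gravitates toward the activity it can actually learn and drifts away from pure noise it can never predict:

```python
from itertools import cycle

class CuriousAgent:
    """Toy 'learning progress' agent: it is rewarded not for outcomes
    themselves but for how quickly its prediction errors shrink."""

    def __init__(self, n_activities, lr=0.5):
        self.pred = [0.0] * n_activities      # current prediction per activity
        self.last_err = [1.0] * n_activities  # previous prediction error
        self.progress = [1.0] * n_activities  # recent reduction in error
        self.lr = lr

    def choose(self):
        # Seek out the activity where predictions are improving fastest.
        return max(range(len(self.pred)), key=lambda i: self.progress[i])

    def observe(self, i, outcome):
        err = abs(outcome - self.pred[i])
        self.progress[i] = self.last_err[i] - err   # learning progress
        self.last_err[i] = err
        self.pred[i] += self.lr * (outcome - self.pred[i])  # refine the model

# Activity 0 is learnable (it always yields 5.0); activity 1 just flips
# between extremes, so no prediction of it ever improves.
noisy = cycle([8.0, -8.0])
activities = [lambda: 5.0, lambda: next(noisy)]

agent = CuriousAgent(n_activities=2)
choices = []
for _ in range(30):
    i = agent.choose()
    choices.append(i)
    agent.observe(i, activities[i]())

print(choices.count(0), choices.count(1))  # the learnable activity dominates
```

After one disappointing sample of the unpredictable activity, this agent settles on the activity where its predictions keep improving, a crude echo of toddlers (and AIBO) "autonomously seeking out tasks with the greatest potential for learning." The real robots in the article, of course, work with far richer sensory and motor models.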

On a much simpler level, MIT has developed a robotic fish that can swim among real sea creatures without disturbing them, for more efficient observation. This device operates by remote control:

Soft Robotic Fish

The Soft Robotic Fish (SoFi) doesn't really fit my idea of a robot. To me, a true robot moves on its own and makes decisions, like the learning-enabled AI brains described above—or at least performs choices that simulate the decision-making process. The inventors of SoFi, however, hope to create a future version that would be self-guiding by means of machine vision. Still, an artificial fish programmed to home in on and follow an individual live fish is a far cry from robots that learn new information and tasks by proactively exploring their environments.

Can the latter eventually develop minds like ours? The consensus seems to be that we're nowhere near understanding the human mind well enough to approach that goal. In view of the observed fact that "caregivers are crucial to children's development," one researcher quoted in the SCIENTIFIC AMERICAN article maintains that a robot might be able to become "truly humanlike" only "if somebody can take care of a robot like a child." There's a story here, which has doubtless already been written more than once; an example might be the film A.I. ARTIFICIAL INTELLIGENCE, which portrays a tragic outcome for the android child, programmed to love its/his "parents" but rejected when the biological son returns to the family.

One episode of SESAME STREET defined a living animal or person as a creature that moves, eats, and grows. Most robots can certainly move on their own. Battery-operated robots can be programmed to seek electrical outlets and recharge themselves, analogous to taking nourishment. Learning equals growth, in a sense. Is a machine capable of those functions "alive"?

Margaret L. Carter

Carter's Crypt