The March issue of SCIENTIFIC AMERICAN contains a new article on improving AI by developing robots that learn like children. Unfortunately, non-subscribers can't read the full article online, only a teaser: Robots Learning Like Children
As we know, computer brains perform very well at many tasks that are hard for human beings, such as rapid math calculations and games like chess and Go—systems with a finite number of clearly defined rules. Human children, by contrast, learn "by exploring their surroundings and experimenting with movement and speech." For a robot to learn that way, it has to be able to interact with its environment physically and process sensory input. Roboticists have discovered that both children and robots learn better when new information is consistently linked with particular physical actions. "Our brains are constantly trying to predict the future—and updating their expectations to match reality." A fulfilled prediction provides a reward in itself, and toddlers actively pursue objects and situations that allow them to make and test predictions. To simulate this phenomenon in artificial intelligence, researchers have programmed robots to maximize accurate predictions. The "motivation to reduce prediction errors" can even impel androids to be "helpful" by completing tasks at which human experimenters "fail." A puppy-like machine, the Sony AIBO, learned to do such things as grasp objects and interact with other robots without being programmed for those specific tasks. The general goal "to autonomously seek out tasks with the greatest potential for learning" spontaneously produced those results. Now, that sounds like what we'd call learning!
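For readers curious how "seeking out tasks with the greatest potential for learning" might look in code, here is a tiny toy sketch of my own (not taken from the article or from any real robot). The two "activities," the one-parameter predictor, and the learning rule are all illustrative assumptions: the agent tracks how much each activity lets it reduce its prediction error, and prefers the one where its predictions are actually improving rather than one that is pure noise.

```python
import random

random.seed(0)

def learnable(x):    # outcome follows a simple rule the agent can master
    return 2.0 * x

def noise(x):        # outcome is random; prediction can never improve
    return random.uniform(-1.0, 1.0)

def learning_progress(activity, steps=200):
    """Train a one-parameter predictor on an activity and return how much
    its average squared prediction error dropped from the first quarter
    of practice to the last quarter."""
    w = 0.0
    errs = []
    for _ in range(steps):
        x = random.uniform(0.0, 1.0)
        y = activity(x)
        errs.append((y - w * x) ** 2)
        w += 0.5 * (y - w * x) * x   # crude gradient step on the predictor
    q = steps // 4
    return sum(errs[:q]) / q - sum(errs[-q:]) / q

progress = {name: learning_progress(fn)
            for name, fn in [("learnable", learnable), ("noise", noise)]}
preferred = max(progress, key=progress.get)
print(preferred)  # the agent favors the activity where predictions improve
```

The point of the toy is the one the article makes: an agent rewarded for reducing prediction errors will gravitate toward activities it can actually learn, and away from unlearnable randomness, without anyone programming those specific preferences.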
On a much simpler level, MIT has developed a robotic fish that can swim among real sea creatures without disturbing them, for more efficient observation. This device operates by remote control: Soft Robotic Fish
The Soft Robotic Fish (SoFi) doesn't really fit my idea of a robot. To me, a true robot moves on its own and makes decisions, like the learning-enabled AI brains described above—or at least behaves in ways that simulate decision-making. The inventors of SoFi, however, hope to create a future version that would be self-guiding by means of machine vision. Still, an artificial fish programmed to home in on and follow an individual live fish is a far cry from robots that learn new information and tasks by proactively exploring their environments.
Can the latter eventually develop minds like ours? The consensus seems to be that we're nowhere near understanding the human mind well enough to approach that goal. In view of the observed fact that "caregivers are crucial to children's development," one researcher quoted in the SCIENTIFIC AMERICAN article maintains that a robot might be able to become "truly humanlike" only "if somebody can take care of a robot like a child." There's a story here, which has doubtless already been written more than once; an example might be the film A.I. ARTIFICIAL INTELLIGENCE, which portrays a tragic outcome for the android child, programmed to love its/his "parents" but rejected when the biological son returns to the family.
One episode of SESAME STREET defined a living animal or person as a creature that moves, eats, and grows. Most robots can certainly move on their own. Battery-operated robots can be programmed to seek electrical outlets and recharge themselves, analogous to taking nourishment. Learning equals growth, in a sense. Is a machine capable of those functions "alive"?
Margaret L. Carter

Carter's Crypt