
Thursday, December 14, 2023

Decoding Brain Waves

Can a computer program read thoughts? An experimental project uses AI as a "brain decoder," in combination with brain scans, to "transcribe 'the gist' of what people are thinking, in what was described as a step toward mind reading":

Scientists Use Brain Scans to "Decode" Thoughts

The example in the article discusses how the program interprets what a person thinks while listening to spoken sentences. Although the system doesn't translate the subject's thoughts into the exact same words, it's capable of accurately rendering the "gist" into coherent language. Moreover, it can even accomplish the same thing when the subject simply thinks about a story or watches a silent movie. Therefore, the program is "decoding something that is deeper than language, then converting it into language." Unlike earlier types of brain-computer interfaces, this noninvasive system doesn't require implanting anything in the person's brain.

However, the decoder isn't perfect yet; it has trouble with personal pronouns, for instance. Moreover, it's possible for the subject to "sabotage" the process with mental tricks. Participating scientists reassure people concerned about "mental privacy" that the system works only after it has been trained on the particular person's brain activity through many hours in an MRI scanner. Nevertheless, David Rodriguez-Arias Vailhen, a bioethics professor at Spain's Granada University, expresses apprehension that the more highly developed versions of such programs might lead to "a future in which machines are 'able to read minds and transcribe thought'. . . warning this could possibly take place against people's will, such as when they are sleeping."

Here's another article about this project, explaining that the program functions on a predictive model similar to ChatGPT. As far as I can tell, the system works only with thoughts mentally expressed in words, not pure images:

Brain Activity Decoder Can Read Stories in People's Minds

Researchers at the University of Texas at Austin suggest as one positive application that the system "might help people who are mentally conscious yet unable to physically speak, such as those debilitated by strokes, to communicate intelligibly again."

An article on the Wired site explores in depth the nature of thought and its connection with language from the perspective of cognitive science.

Decoders Won't Just Read Your Mind -- They'll Change It

Suppose the mind isn't, as traditionally assumed, "a self-contained, self-sufficient, private entity"? If not, is there a realistic risk that "these machines will have the power to characterize and fix a thought’s limits and bounds through the very act of decoding and expressing that thought"?

How credible is the danger foreshadowed in this essay? If AI eventually gains the power to decode anyone's thoughts, not just those of individuals whose brain scans the system has been trained on, will literal mind-reading come into existence? Could a future Big Brother society watch citizens not just through two-way TV monitors but by inspecting the contents of their brains?

Margaret L. Carter

Please explore love among the monsters at Carter's Crypt.

Thursday, June 15, 2023

The Internet Knows All

This week I acquired a new HP computer to replace my old Dell, which had started unpredictably freezing up at least once per day. Installing Windows 11 didn't fix it. It had reached the point where even CTRL-ALT-DEL didn't unfreeze it; I had to turn it off manually and restart every time it failed. It feels great to have a reliable machine again.

Two things struck me about the change: First, the price of the new one, bundled with a keyboard and mouse, was about $500. Our first computer, an Apple II+ purchased as a gift at Christmas of 1982, cost over $2000 with, naturally, nowhere near the capabilities of today's devices. No hard drive, no Windows or Apple equivalent thereof, and of course no internet. And in that year $2000 was worth a whole lot more than $2000 now. Imagine spending the equivalent in 2023 dollars on a home electronic device. Back then, it was a serious financial decision that put us into debt for a long time. Thanks to advances in technology, despite inflation some things DO get cheaper. An amusing memory: After unveiling the wondrous machine in 1982, my husband decreed, "The kids are never going to touch this." LOL. That rule didn't last long! Nowadays, in contrast, we'd be lost if we couldn't depend on our two youngest offspring (now middle-aged) for tech support.

The second thing that struck me after our daughter set up the computer: How smoothly and, to my non-tech brain, miraculously, Windows and Google Chrome remembered all my information from the previous device. Bookmarks, passwords, document files (on OneDrive), everything I needed to resume work almost as if the hardware hadn't been replaced. What a tremendous convenience. On the other hand, it's a little unsettling, too. For me, the most eerie phenomenon is the way many websites know information from other websites they have no connection to. For example, the weather page constantly shows me ads for products I've browsed on Amazon. Sometimes it seems that our future AI overlords really do see all and know all.

In response to recent warnings about the "existential threat" posed by AI, science columnist Keith Tidman champions a more optimistic view:

Dark Side to AI?

He points out the often overlooked difference between weak AI and strong AI. Weak AI, which already exists, isn't on the verge of taking over the world. Tidman, however, seems less worried about the subtle dangers of the many seductively convenient features of the current technology than most commentators are. As for strong AI, it's not here yet, and even if it eventually develops human-like intelligence, Tidman doesn't think it will try to dominate us. He reminds us, "At the moment, in some cases what’s easy for humans to do is extraordinarily hard for machines to do, while the converse is true, too." If this disparity "evens out" in the long run, he nevertheless believes, "Humans won’t be displaced, or harmed, but creative human-machine partnerships will change radically for the better."

An amusing incidental point about this article: On the two websites I found by googling for it, one page is headlined, "There Is Inevitable Dark Side to AI" and the other, "There Is No Inevitable Dark Side to AI." So even an optimistic essay can be read pessimistically! (Unless the "No" was just accidentally omitted in the first headline. But it still looks funny.)

Margaret L. Carter

Carter's Crypt

Thursday, June 08, 2023

Existential Threat?

As you may have seen in the news lately, dozens of experts in artificial intelligence have supported a manifesto claiming AI could threaten the extinction of humanity:

AI Could Lead to Extinction

Some authorities, however, maintain that this fear is overblown and "a distraction from issues such as bias in systems that are already a problem" and other "near-term harms."

Considering the "prophecies of doom" in detail, we find that the less radically alarmist doom-sayers aren't talking about Skynet, HAL 9000, or even self-aware Asimovian robots circumventing the Three Laws to dominate their human creators. More immediately realistic warnings call attention to risks posed by such things as the "deep fake" programs Rowena discusses in her recent post. In the near future, we could see powerful AI "drive an exponential increase in the volume and spread of misinformation, thereby fracturing reality and eroding the public trust, and drive further inequality, particularly for those who remain on the wrong side of the digital divide."

On the other hand, a member of an e-mail list I subscribe to has written an essay maintaining that the real existential threat of advanced AI doesn't consist of openly scary threats, but irresistibly appealing cuteness:

Your Lovable AI Buddy

Suppose, in the near future, everyone has a personal AI assistant, more advanced and individually programmed than present-day Alexa-type devices? Not only would this handheld, computerized friend keep track of your schedule and appointments, preorder meals from restaurants, play music and stream videos suited to your tastes, maybe even communicate with other people's AI buddies, etc., "It knows all about you, and it just wants to make you happy and help you enjoy your life. . . . It would be like a best friend who’s always there for you, and always there. And endlessly helpful." As he mentions, present-day technology could probably create a device like that now. And soon it would be able to look much more lifelike than current robots. Users would get emotionally attached to it, more so than with presently available lifelike toys. What could possibly be the downside of such an ever-present, "endlessly helpful" friend or pet?

Not so fast. If we're worried about hacking and misinformation now, think of how easily our hypothetical AI best friend could subtly shape our view of reality. At the will of its designers, it could nudge us toward certain political or social viewpoints. It could provide slanted, "carefully filtered" answers to sensitive questions. This development wouldn't require "a self-aware program, just one that seems to be friendly and is capable of conversation, or close enough." Building on its vast database of information collected from the internet and from interacting with its user, "It wouldn’t just be trained to emotionally connect with humans, it would be trained to emotionally manipulate humans."

In a society with a nearly ubiquitous as well as almost omniscient product like that, the disadvantaged folks "on the wrong side of the digital divide" who couldn't afford one might even be better off, at least in the sense of privacy and personal freedom.

Margaret L. Carter

Carter's Crypt

Thursday, February 09, 2023

Creative AI?

There's been a lot of news in the media lately about AI programs that generate text or images. One of the e-mail lists I subscribe to recently had a long thread about AI text products and especially art. Some people argued about whether a program that gets "ideas" (to speak anthropomorphically) from many different online images and combines multiple elements from them to produce a new image unlike any of the sources is infringing artists' copyrights. I tend to agree with the position that such a product is in no sense a "copy" of any particular original.

Here's the Wikipedia article on ChatGPT (Chat Generative Pre-trained Transformer):

ChatGPT

The core function of that program is "to mimic a human conversationalist." However, it does many other language-related tasks, such as "to write and debug computer programs" and "to compose music, teleplays, fairy tales, and student essays" and even "answer test questions," as well as other functions such as playing games and emulating "an entire chat room." It could also streamline rote tasks such as filling out forms. It has limitations, though, which are acknowledged by its designers. Like any AI, it's constrained by its input, and it may sometimes generate nonsense. When asked for an opinion or judgment, the program replies that, being an AI, it doesn't have feelings or opinions.

This week the Baltimore SUN ran an editorial about the potential uses and abuses of the program. It includes a conversation with ChatGPT, asking about various issues of interest to Maryland residents. For instance, the AI offers a list of "creative" uses for Old Bay seasoning. It produces grammatically correct, coherent prose but tends to answer in generalizations that would be hard to disagree with. One drawback is that it doesn't provide attribution or credit for its sources. As the editorial cautions, "That makes fact-checking difficult, and puts ChatGPT (and its users) at risk of both plagiarizing the work of others and spreading misinformation."

A Chat with ChatGPT

Joshua Wilson, an associate professor of education at the University of Delaware, discusses the advantages and limitations of ChatGPT:

Writing Without Thinking?

It can churn out an essay on a designated topic, drawing on material it garners from the internet. A writer could treat this output as a pre-first draft that the human creator could then revise and elaborate. It's an "optimal synthesizer" but lacks "nuance and perspective." To forbid resorting to ChatGPT would be futile, he thinks; instead, we need to figure out the proper ways to use it. He sees it as a valid device to save time and effort, provided we regard its product as a "starting point and not a final destination."

David Brooks, a NEW YORK TIMES columnist, offers cautionary observations on art and prose generated by AI programs:

Major in Being Human

He distinguishes between tasks a computer program can competently perform and those that require "a humanistic core," such as "passion, pain, longings. . . imagination, bursts of insight, anxiety and joy." He advises the next generation to educate themselves for "skills that machines will not replicate," e.g., creativity, empathy, a "distinct personal voice," etc.

Some school systems have already banned ChatGPT in the classroom as a form of cheating. Moreover, AI programs exist with the function of detecting probable AI-generated prose. From what I've read about text-generating and art-producing programs, it seems to me that in principle they're tools like spellcheck and electronic calculators, even though much more complex. Surely they can be used for either fruitful or flawed purposes, depending on human input.

Margaret L. Carter

Carter's Crypt

Thursday, July 21, 2022

The Future of Elections

Earlier this week, we voted in the primary election in this state. Thinking about voting reminded me of a story I read many years ago (whose title and author I don't remember). This speculative piece on how elections might work in the distant future proposed a unique procedure that could function only with a near-omniscient AI accumulating immense amounts of data.

After analyzing the demographics of the country in depth, the central computer picks a designated voter. This person, chosen as most effectively combining the typical characteristics of all citizens, votes in the national election on behalf of the entire population. The really unsettling twist in the tale is that the "voter" doesn't even literally vote. He (in the story, the chosen representative is a man) answers a battery of questions, if I recall the method correctly. The computer, having collated his responses, determines which candidates and positions he would support.

This method of settling political issues would certainly make things simpler. No more waiting days or potentially weeks for all the ballots to be counted. No contesting of results, since the single aggregate "vote" would settle everything on the spot with no appeal to the AI's decision.

The story's premise seems to have an insurmountable problem, however, regardless of the superhuman intelligence, vast factual knowledge, and fine discrimination of the computer. Given the manifold racial, political, economic, ethnic, and religious diversity of the American people, how could one "typical" citizen stand in for all? An attempt to combine everybody's traits would inevitably involve many direct, irreconcilable contradictions. The AI might be able to come up with one person who satisfactorily represents the majority. When that person's "vote" became official, though, the political rights of minorities (religious, racial, gender, or whatever) would be erased.

A benevolent dictatorship by an all-knowing, perfectly unbiased computer (if we could get around the GIGO principle of its reflecting the biases of its programmers) does sound temptingly efficient at first glance. But I've never read or viewed a story, beyond a speculative snippet such as the one described above, about such a society that ultimately turned out well. Whenever the Enterprise came across a computer-ruled world in the original STAR TREK, Kirk and Spock hastened to overthrow the AI "god" in the name of human free will.

Margaret L. Carter

Carter's Crypt

Thursday, September 09, 2021

More Futuristic Forecasts

"Prediction is hard, especially about the future." Over the past week, I've been rereading LIFE AND TIME, a 1978 collection of essays by Isaac Asimov (some of them written as early as the 1960s). In contrast to the imaginative speculations in his fiction, these articles include serious forecasts about potential developments in technology and society.

Most strikingly, he anticipated the internet, a global repository of information anybody could draw upon. He envisioned everybody on Earth having a personal "channel" just as most people now have individual telephone numbers. We sort of have that system now, considering the unique IP address of each computer as a personal channel. Also, an individual tablet or smart phone serves the same function. Incidentally, J. D. Robb's "In Death" SF mystery series anticipated today's smart phone as the pocket "link" most people in her fictional future carry with them, long before such devices became common in real life.

Asimov hailed the future possibilities of lifelong, customized learning through the worldwide computer bank. Granted, many people benefit from the internet in that way, yet the satirical lament too often holds some truth: We have a network that gives us access to the entire accumulated knowledge of humanity, and we use it mostly for political rants and pictures of cats. Asimov suggested computer learning could overcome one of the main disadvantages of our educational system, the necessity for one teacher to instruct a large group of students, making it impossible to adjust lessons to the comprehension level, interests, and learning style of each individual. Computer education could effectively give each pupil a private tutor. Although we've recently had over a year of experience with online education, it's still been mainly a group-oriented activity. Advanced AI might fulfill Asimov's vision.

He also foresaw cashless monetary transactions, electronic transmission of documents, and virtual rather than in-person business meetings, all of which exist now. Unfortunately, his expectation that these developments would greatly reduce travel and its attendant pollution hasn't come to pass yet, probably because many employers are reluctant to embrace the full potential of remote work.

On some topics, he was too pessimistic. For example, he foresaw the world population reaching seven billion by the early 21st century, a point we've already passed. However, we're not forced to survive on synthetic nourishment derived from genetically engineered microbes, as he speculated might become necessary. We still eat a lavish variety of fresh foods. He seemed to believe a population of the current level or higher would reduce humankind to universal misery; while many of the planet's inhabitants do live in abject circumstances, Earth hasn't yet become a dreary anthill.

Not surprisingly, Asimov favored genetically modified agricultural products, which already exist, although not in some of the radically altered or enhanced forms he imagined. He also focused on the hope of cleaner energy, perhaps from controlled fusion or large-scale solar power. He proposed solar collectors in orbit, beaming energy down to Earth, far from a practical solution at present. And, as everyone knows, fusion-generated power is only twenty years away—and has been for a generation or more. :) Asimov predicted autonomous cars, now almost commercially viable. He also discussed the potential advantages of flying cars, apparently without considering the horror of city skies thronged with thousands of individual VTOL vehicles piloted by hordes of amateurs. Maybe self-driving vehicles would solve that problem, being programmed to avoid collisions.

To save energy on cooling and heating as well as to shelter inhabitants from severe weather, he proposed moving cities underground, as in his novel THE CAVES OF STEEL. This plan might be the optimal strategy for colonizing the Moon or Mars. I doubt most Earth citizens would accept it unless it becomes the only alternative to a worldwide doom scenario. Asimov, a devoted claustrophile, seemed to underestimate the value the average person puts on sunshine, fresh air, nature, and open space.

In general, he tended to be over-pessimistic about the fate looming over us unless we solve the problem of overpopulation right now (meaning, from his viewpoint, in the 1980s). As dire as that problem is in the long run, the decades since the publication of the essays in LIFE AND TIME demonstrate that Earth is more resilient than Asimov (and many prognosticators at that time) feared. Moreover, the worldwide birthrate is declining, although the shift isn't spread evenly over the world and, for the present, global population continues to rise through sheer momentum. Asimov analyzed the issue of whether a demographic pattern of old people far outnumbering younger ones would lead to a rigid, reactionary culture. He maintained that the mental stagnation traditionally associated with aging could be prevented by an emphasis on lifelong learning and creativity. He devoted no attention to the more immediate problem of declining birthrates that some nations already face—a young workforce that isn't large enough to support its millions of retired and often infirm elders. Encouraging immigration would help. (But that's "modpol"—shorthand for modern politics on one list I subscribe to—so I'll say no more about it.) In the long run, however, if and when prosperity rises and births decline worldwide, there won't be anyplace for a supply of young workers to immigrate from.

Asimov seemed over-optimistic about the technological marvels and wondrous lifestyle we'll all enjoy IF over-population and its attendant problems are conquered. He envisioned the 21st century as a potential earthly paradise. Judging from the predictions of such optimists over many decades, just as controlled fusion is always twenty years away, utopia is always fifty years away.

Margaret L. Carter

Carter's Crypt

Thursday, August 26, 2021

Can AI Be a Bad Influence?

In a computer language-learning experiment in 2016, a chat program designed to mimic the conversational style of teenage girls devolved into spewing racist and misogynistic rhetoric. Interaction with humans quickly corrupted an innocent bot, but could AI corrupt us, too?

AI's Influence Can Make Humans Less Moral

Here's a more detailed explanation (from 2016) of the Tay program and what happened when it was let loose on social media:

Twitter Taught Microsoft's AI Chatbot to Be a Racist

The Tay Twitter bot was designed to get "smarter" in the course of chatting with more and more users, thereby, it was hoped, "learning to engage people through 'casual and playful conversation'." Unfortunately, spammers apparently flooded it with poisonous messages, which it proceeded to imitate and amplify. If Tay was ordered, "Repeat after me," it obeyed, enabling anyone to put words in its virtual mouth. However, it also started producing racist, misogynistic, and just plain weird utterances spontaneously. This debacle raises questions such as "how are we going to teach AI using public data without incorporating the worst traits of humanity?"

The L.A. TIMES article linked above, with reference to the Tay episode as a springboard for discussion, explores this problem in more general terms. How can machines "make humans themselves less ethical?" Among other possible influences, AI can offer bad advice, which people have been observed to follow as readily as they do online advice from live human beings; AI advice can "provide a justification to break ethical rules"; AI can act as a negative role model; it can be easily used for deceptive purposes; and outsourcing ethically fraught decisions to algorithms can be dangerous. The article concludes that "whenever AI systems take over a new social role, new risks for corrupting human behavior will emerge."

This issue reminds me of Isaac Asimov's Three Laws of Robotics, especially since I've recently been rereading some of his robot-related fiction and essays. As you'll recall, the First Law states, "A robot may not injure a human being or, through inaction, allow a human being to come to harm." In one of Asimov's early stories, a robot learns to lie in order to tell people what they want to hear. As this machine perceives the problem of truth and lies, the revelation of distressing truths would cause humans emotional pain, and emotional harm is still harm. Could AI programs be taught to avoid causing emotional and ethical damage to their human users? The potential catch is that a computer intelligence can acquire ethical standards only by having them programmed in by human designers. As a familiar precept declares, "Garbage in, garbage out." Suppose programmers train an AI to regard the spreading of bizarre conspiracy theories as a vital means of protecting the public from danger?

It's a puzzlement.

Margaret L. Carter

Carter's Crypt

Thursday, August 19, 2021

Mind-Reading Technology

Scientists from the University of California, San Francisco have developed a computer program to translate the brain waves of a 36-year-old paralyzed man into text:

Scientists Translate Brain Waves

They implanted an array of electrodes into the sensorimotor cortex of the subject's brain and "used 'deep-learning algorithms' to train computer models to recognize and classify words from patterns in the participant’s brain activity." The training process consisted of showing words on a screen and having the man think about saying them, going through the mental activity of trying to say the words, which he'd lost the physical ability to do. Once the algorithm had learned to match brain patterns to particular words, the subject could produce text by thinking of sentences that included words from the program's vocabulary. Using this technology, he could generate language at a rate of about fifteen words per minute (although not error-free) as opposed to only five words per minute while operating a computer typing program with movements of his head.
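For a rough feel of what training models "to recognize and classify words from patterns" of brain activity can mean, here's a minimal sketch in Python using scikit-learn. The 128-number "activity" vectors and the five-word vocabulary are invented stand-ins; the real system's recordings, vocabulary, and models are vastly more elaborate.

    # Toy word decoder: invented "brain activity" vectors, not real data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    vocab = ["water", "hello", "family", "good", "yes"]  # made-up vocabulary

    # Pretend each word evokes a characteristic activity pattern plus noise.
    prototypes = rng.normal(size=(len(vocab), 128))
    X = np.vstack([p + rng.normal(scale=0.5, size=(40, 128)) for p in prototypes])
    y = np.repeat(range(len(vocab)), 40)

    # Learn which word a given pattern of "activity" represents.
    clf = LogisticRegression(max_iter=1000).fit(X, y)

    # Decode a new, noisy pattern to the most likely word.
    new_pattern = prototypes[2] + rng.normal(scale=0.5, size=128)
    print(vocab[clf.predict([new_pattern])[0]])  # should print "family"

Once a classifier like this is trained, decoding a "thought" is just matching a fresh pattern against the learned ones, which is why the system works only for the person it was trained on.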

Training the program to this point wasn't easy, apparently. The course took 48 sessions over a period of 81 weeks. Still, it's the closest thing to "mind-reading" we have so far, a significant advance over techniques that let a patient control a prosthetic limb by thought alone. According to Dr. Lee H. Schwamm, an officer of the American Stroke Association, “This study represents a transformational breakthrough in the field of brain-computer interfaces."

Here's an article about an earlier experiment in which a paralyzed man learned to produce sentences with "a computer system that turns imagined handwriting into words" at a rate of 18 words per minute.

Mindwriting Brain Computer

The hardware consists of "small, implantable computer chips that read electrical activity straight from the brain." The subject imagined writing letters in longhand, mentally going through the motions. At the same time, the scientists "recorded activity from the brain region that would have controlled his movements." The collected recordings were used to train the AI to translate the man's "mindwriting" into words on a screen. Eventually the algorithm achieved a level of 94.1% accuracy—with the aid of autocorrect, 99%.

While those programs are far from literal telepathy, the ability to read any thoughts that rise to the surface of a subject's mind, they still constitute an amazing advance. As long as such technology requires hardware implanted in an individual's brain, however, we won't have to worry about our computer overlords randomly reading our minds.

Margaret L. Carter

Carter's Crypt

Thursday, July 15, 2021

Monopolies and Interoperability

Another LOCUS article by Cory Doctorow on monopolies and trust-busting:

Tech Monopolies

He begins this essay by stating that he doesn't oppose monopolies for the sake of competition or choice as ends in themselves. He cares most about "self-determination." By this he means the individual consumer "having the final say over how you live your life." When a small handful of companies controls any given field or industry, customers have only a limited range of products or services to choose among, preselected by those companies, even if this limitation remains mostly invisible to the average consumer. Not surprisingly, Doctorow focuses on this constraint as imposed by Big Tech. He recaps the growth of "the modern epidemic of tolerance for monopolies" over the past forty years. In the present, technology giants tend to crush small competitors and merge with large ones.

To some extent, this tendency—e.g., the situation Doctorow highlights in which everybody is on Facebook because everybody else is, in a feedback loop of expansion—provides a convenience to consumers. I'm glad I can find just about anyone I want to get in touch with on Facebook. As a result of such "network effects," a system becomes more valuable the more users it has. As a reader and a bibliographer, I don't know how I'd manage nowadays if Amazon didn't list almost every book ever published. I resent the brave new broadcasting world in which I have to pay for several different streaming services to watch only a couple of desired programs on each. I LIKED knowing almost any new series I wanted to see would air on one of our hundreds of cable channels. (Yes, we're keeping our cable until they pry it out of my cold, dead remote-clicking hand.) On the other hand, I acknowledge Doctorow's point that those conveniences also leave us at the mercy of the tech moguls' whims.

Half of his article discusses interoperability as a major factor in resisting the effects of monopolies. Interoperability refers to things working together regardless of their sources of origin. All appliances can plug into all electrical outlets of the proper voltage. Any brands of light bulbs or batteries can work with any brands of lamps or electronic devices. Amazon embraces interoperability with its Kindle books by allowing customers to download the Kindle e-reading app on any device. Likewise, "all computers are capable of running all programs." For self-published writers, services such as Draft2Digital offer the capacity to get books into a wide range of sales outlets with no up-front cost. Facebook, on the other hand, forecloses interoperability by preventing users from taking their "friends" lists to other services, a problem that falls under "switching costs." If it's too much trouble to leave Facebook, similar to the way it used to be too much trouble to change cell phone providers before it became possible to keep your old phone number, consumers are effectively held hostage unless willing to pay ransom in the form of switching costs (monetary or other).

Doctorow concludes, however, with the statement that the fundamental remedy for "market concentration" isn't interoperability but "de-concentrating markets." Granting a certain validity to his position, though, how far would we willingly shift in that direction if we had to give up major conveniences we've become accustomed to?

Margaret L. Carter

Carter's Crypt

Thursday, June 24, 2021

Woebot

"Virtual help agents" have been developed to perform many support tasks such as counseling refugees and aiding people to access disability benefits. Now a software app named Woebot is claimed to perform actual talk therapy:

Chatbot Therapist

Created by a team at Stanford, "Woebot uses brief daily chat conversations, mood tracking, curated videos, and word games to help people manage mental health." For $39 per month, you can have Woebot check in with you once a day. It doesn't literally talk but communicates by Facebook Messenger. The chatbot mainly asks questions and works through a "decision tree" not unlike, in principle, a choose-your-own-adventure story. It follows the precepts of cognitive therapy, guiding patients to alter their own mental attitudes. Woebot is advertised as "a treatment in its own right," an accessible alternative for people who can't get conventional therapy for whatever reason. If the AI encounters someone in a mental-health crisis, "it suggests they seek help in the real world" and lists available resources. Text-based communication with one's "therapist" may sound less effective than oral conversation, yet in fact it was found that "the texting option actually reduced interpersonal anxiety."
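Since the article likens Woebot's "decision tree" to a choose-your-own-adventure story, here's a toy sketch in Python of how such a scripted flow might branch. Every question, answer, and node name below is my own invention for illustration, not Woebot's actual script.

    # Toy decision-tree chat: each node holds a question plus branches
    # keyed by recognized answers. All wording here is invented.
    tree = {
        "start": ("How is your mood today, good or bad?",
                  {"good": "celebrate", "bad": "explore"}),
        "explore": ("What's weighing on you more, work or home?",
                    {"work": "reframe", "home": "reframe"}),
        "celebrate": ("Glad to hear it! What went well today?", {}),
        "reframe": ("Try restating that worry as a smaller, testable thought.", {}),
    }

    def chat(node="start"):
        while True:
            question, branches = tree[node]
            print("Bot:", question)
            if not branches:                     # leaf node: end of this path
                break
            answer = input("You: ").strip().lower()
            node = branches.get(answer, node)    # unrecognized answer: ask again

    chat()

However sympathetic the questions sound, the program is simply walking a pre-written graph, which is why Woebot hands off to real-world resources when a crisis falls outside its script.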

It's possible that, within the limits of its abilities, this program may be better than a human therapist in that one respect. Many people open up more to a robot than to another person. Human communication may be hampered by the "fear of being judged." Alison Darcy, one of the creators of Woebot, remarks, "There’s nothing like venting to an anonymous algorithm to lift that fear of judgement." One of Woebot's forerunners in this field was a computer avatar "psychologist" called Ellie, developed at the University of Southern California. In a 2014 study of Ellie, "patients" turned out to be more inclined to speak freely if they thought they were talking to a bot rather than a live psychologist. Ellie has an advantage over Woebot in that she's programmed to read body language and tone of voice to "pick up signs of depression and post-traumatic stress disorder." Data gathered in these dialogues are sent to human clinicians. More on this virtual psychologist:

Ellie

Human beings often anthropomorphize inanimate objects. One comic strip in our daily paper regularly shows the characters interacting and arguing with an Alexa-type program like another person in the room and treating the robot vacuum as if it's at least as intelligent as a dog. So why not turn in times of emotional distress to a therapeutic AI? We can imagine a patient experiencing "transference" with Woebot—becoming emotionally involved with the AI in a one-way dependency of friendship or romantic attraction—a quasi-relationship that could make an interesting SF story.

Margaret L. Carter

Carter's Crypt

Thursday, November 12, 2020

More on AI

Cory Doctorow's latest LOCUS column continues his topic from last month, the sharp divide between the artificial intelligence of contemporary technology and the self-aware computers of science fiction. He elaborates on his arguments against the possibility of the former's evolving into the latter:

Past Performance

He explains current machine learning "as a statistical inference tool" that "analyzes training data to uncover correlations between different phenomena." That's how an e-mail program predicts what you're going to type next or a search engine guesses your question from the initial words. An example he analyzes in some detail is facial recognition. Because a computer doesn't "know" what a face is but only looks for programmed patterns, it may produce false positives such as "doorbell cameras that hallucinate faces in melting snow and page their owners to warn them about lurking strangers." AI programs work on a quantitative rather than qualitative level. As remarkably as they perform the functions for which they were designed, "statistical inference doesn’t lead to comprehension, even if it sometimes approximates it." Doctorow contrasts the results obtained by mathematical analysis of data with the synthesizing, theorizing, and understanding processes we think of as true intelligence. He concludes that "the idea that if we just get better at statistical inference, consciousness will fall out of it is wishful thinking. It’s a premise for an SF novel, not a plan for the future."
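To see how far "statistical inference" is from comprehension, consider a bigram predictor of the sort that guesses your next word. This few-line Python sketch, trained on a sentence I made up, predicts purely from counted co-occurrences:

    # Bigram "next word" predictor: counts correlations, understands nothing.
    from collections import Counter, defaultdict

    # Invented training text; a real model ingests vastly more.
    words = ("the cat sat on the mat and the cat saw the dog "
             "and the cat ran to the rug").split()

    follows = defaultdict(Counter)
    for w1, w2 in zip(words, words[1:]):
        follows[w1][w2] += 1          # tally which word follows which

    def predict_next(word):
        # Most frequent follower in the training data; no meaning involved.
        return follows[word].most_common(1)[0][0] if follows[word] else None

    print(predict_next("the"))        # "cat" -- it followed "the" most often

It will happily predict "cat" after "the" without the faintest notion of what a cat is, which is Doctorow's point in miniature.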

While I'd like to believe a sufficiently advanced supercomputer with more interconnections, "neurons," and assimilation of data than any human brain could hold might awaken to self-awareness, like Mike in Heinlein's THE MOON IS A HARSH MISTRESS, I must admit Doctorow's argument is highly persuasive. Still, people do anthropomorphize their technology, even naming their Roomba vacuum cleaners. (I haven't done that. Our Roomba is a low-end, fairly dumb model. Its intelligence is limited to changing direction when it bumps into obstacles and returning to its charger when low on power, which I never let it run long enough to do. But nevertheless I give the thing pointless verbal commands on occasion. After all, it listens to me about as well as the cats do.) People carry on conversations with Alexa and Siri. I enjoy remembering a cartoon I saw somewhere of a driver simultaneously listening to the GPS apps on both the car's system and the cell phone. The two GPS voices are arguing with each other about which route to take.

Remember Eliza, the computer therapist program? She was invented in the 1960s, and supposedly some users mistook her for a human psychologist. You can try her out here:

Eliza

As the page mentions, the dialogue goes best if you limit your remarks to talking about yourself. When I tried to engage her in conversation about the presidential election, her lines quickly devolved into, "Do you have any psychological problems?" (Apparently commenting that one loathes a certain politician is a red flag.) So these AI therapists don't really pass the Turing test. I've read that if you state to one of them, for instance, "Einstein says everything is relative," it will probably respond, "Tell me more about your family." Many years ago, when the two youngest of our sons were preteens, we acquired a similar program, very simple, which one communicated with by typing, and it would type a reply that the computer's speaker would also read out loud. The kids had endless fun writing sentences such as, "I want [long string of numbers] dollars," and listening to the computer voice retort with something like, "I am not here to fulfill your need for ten quintillion, four quadrillion, nine trillion, fifty billion, one hundred million, two thousand, one hundred and forty-one dollars."
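For the curious, the keyword trick behind Eliza and her descendants fits in a few lines of Python. The patterns below are my own rough inventions in the spirit of the examples above, not Eliza's actual script:

    # Eliza-style keyword matching: invented patterns for illustration.
    import re

    rules = [
        (r"\bI want (.+)", "Why do you want {0}?"),
        (r"\b(mother|father|family|relative)", "Tell me more about your family."),
        (r"\bI am (.+)", "How long have you been {0}?"),
    ]

    def reply(text):
        for pattern, template in rules:
            match = re.search(pattern, text, re.IGNORECASE)
            if match:
                return template.format(*match.groups())
        return "Do you have any psychological problems?"  # all-purpose fallback

    # "relative" trips the family keyword, just as in the joke above.
    print(reply("Einstein says everything is relative"))

A handful of regular expressions and canned templates is all it takes to sustain the illusion of a listener, until the conversation strays off script.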

Margaret L. Carter

Carter's Crypt

Thursday, July 23, 2020

Digisexuals

In 2018, Akihiko Kondo, a Japanese school administrator, married a hologram of a "cyber celebrity," Hatsune Miku, an animated character with no physical existence. She dwells in a Gatebox, "which looks like a cross between a coffee maker and a bell jar, with a flickering, holographic Miku floating inside." She can carry on simple conversations and do tasks such as switching lights on and off (like Alexa, I suppose). Although the marriage has no legal status, Kondo declares himself happy with his choice:

Rise of Digisexuals

According to a different article, Miku originated as "computer-generated singing software with the persona of a big-eyed, 16-year-old pop star with long, aqua-colored hair." Gatebox's offer of marriage registration forms for weddings between human customers and virtual characters has been taken up by at least 3,700 people in Japan (as of 2018). People who choose romance with virtual persons are known as "digisexuals." The CNN article linked above notes, "Digital interactions are increasingly replacing face-to-face human connections worldwide."

Of course, "digital interactions" online with real people on the other end are different from making emotional connections with computer personas. The article mentions several related phenomena, such as the robotic personal assistants for the elderly becoming popular in Japan. Also, people relate to devices such as Siri and Alexa as if they were human and treat robot vacuums like pets. I'm reminded of a cartoon I once saw in which a driver of a car listens to the vehicle's GPS arguing with his cell phone's GPS about which route to take. Many years ago, I read a funny story about a military supercomputer that transfers "her" consciousness into a rocket ship in order to elope with her Soviet counterpart. The CNN article compares those anthropomorphizing treatments of electronic devices to the myth of Pygmalion, the sculptor who constructed his perfect woman out of marble and married her after the goddess Aphrodite brought her to life. As Kondo is quoted as saying about holographic Miku's affectionate dialogue, "I knew she was programmed to say that, but I was still really happy." Still, the fact that he "completely controls the romantic narrative" makes the relationship radically different from human-to-human love.

Falling in love with a virtual persona presents a fundamental dilemma. As long as the object of affection remains simply a program designed to produce a menu of responses, however sophisticated, the relationship remains a pleasant illusion. If, however, the AI becomes conscious, developing selfhood and emotions, it can't be counted on to react entirely as a fantasy lover would. An attempt to force a self-aware artificial person to keep behaving exactly the way the human lover wishes would verge on erotic slavery. You can have either an ideal, wish-fulfilling romantic partner or a sentient, voluntarily responsive one, not both in the same person.

Margaret L. Carter

Carter's Crypt

Thursday, February 27, 2020

Robots Writing

The March 2020 issue of ROMANCE WRITERS REPORT (the magazine of Romance Writers of America) includes an article by Jeannie Lin about artificial intelligence programs writing prose, titled "The Robots Are Coming to Take Our Jobs." I was surprised to learn software that composes written material, including fiction, already exists. No matter how competent such programs eventually become, they can't quite take over our jobs yet, because (as Lin points out) U.S. copyright law requires that a work, to be registered, "was created by a human being." That provision, I suppose, leaves open the question of how much input a computer program can have into a work while it still counts as "created by a human being." Lin tested a program called GPT-2, which takes an existing paragraph of text as a model to write original material as a continuation of the sample work. The AI's follow-up to the opening of PRIDE AND PREJUDICE strikes me as barely coherent. On the other hand, the paragraph generated in response to a sample from one of Lin's own books comes out better, and Lin acknowledges, "The style was not unlike my own." The GPT-2, however, as Lin evaluates it, "is very limited. . . only capable of generating a paragraph of text and is unable to move beyond the very narrow confines of the lines provided."

Here's the website for that program:

OpenAI

The site claims its "unsupervised language model. . . generates coherent paragraphs of text, achieves state-of-the-art performance on many language modeling benchmarks, and performs rudimentary reading comprehension, machine translation, question answering, and summarization."

Here's an article about how AI writes content and some ways that technology is already being used—for instance, to produce sports reports for the Associated Press and other news outlets, financial reports, summaries of longer documents, personalized e-mails, and more:

Artificial Intelligence Can Now Write Amazing Content

Computer programs have also written novels, such as an autobiographical account of the AI's cross-country travels:

AI Wrote a Road Trip Novel

It may be reassuring to read that the result is described as "surreal" and not likely to be mistaken for a human-created book.

Nevertheless, there's a National Novel Generation Month (NaNoGenMo) competition in November for programmers trying to produce AI capable of composing novels. According to Lin's article, this challenge "has generated more than four hundred novels," although they're "more intriguing for how they were created than for their content." Here's the website for NaNoGenMo:

National Novel Generation Month

While I have no desire to cede my creative operations to a computer, I've often wished for a program that would take my detailed outline and compose a novel from it. The first-draft stage is the phase of writing I enjoy least; software that would present me with a draft ready to be edited would be a great boon. And apparently that's not an impossible goal, judging from a novel produced in collaboration between human authors and an AI, which advanced beyond the first round in a contest:

AI Novel

According to the article, "The novel was co-written by Hitoshi Matsubara of Future University Hakodate, his team, and the AI they created. By all accounts, the novel was mostly written by the humans. The L.A. Times reported that there was about 80% human involvement." The process worked this way: “Humans decided the plot and character details of the novel, then entered words and phrases from an existing novel into a computer, which was able to construct a new book using that information.” Sounds like magic!

Still, we're in no danger of losing our jobs to robot authors yet. Aside from the novelty of the concept, I do wonder why we'd need AI-generated fiction on top of the thousands of books each year that already languish unread on Amazon pages, buried in the avalanche of new releases.

Margaret L. Carter

Carter's Crypt

Thursday, October 18, 2018

AI Rights

Here's an article on the PBS website exploring the issue of what might happen if artificial intelligences were granted the status of legal persons:

Artificial Intelligence Personhood

Corporations are already "persons" under the law, with free-speech rights and the capacity to sue and be sued. The author of this article outlines a legal procedure by which a computer program could become a limited liability company. He points out, somewhat alarmingly, "That process doesn’t require the computer system to have any particular level of intelligence or capability." The "artificial intelligence" could be simply a decision-making algorithm. Next, however, he makes what seems to me an unwarranted leap: "Granting human rights to a computer would degrade human dignity." First, bestowing some "human rights" on a computer wouldn't necessarily entail giving it full citizenship, particularly the right to vote. As the article mentions, "one person, one vote" would become meaningless when applied to a program that could make infinite copies of itself. But corporations have been legal "persons" for a long time, and they don't get to vote in elections.

The author cites the example of a robot named Sophia, who (in October 2017) was declared a citizen of Saudi Arabia:

Saudi Arabia Grants Citizenship to a Robot

Some commentators noted that Sophia now has more rights than women or migrant workers in that country. If Sophia's elevated status becomes an official precedent rather than merely a publicity stunt for the promotion of AI research, surely the best solution to the perceived problem would be to improve the rights of naturally born persons. In answer to a question about the dangers of artificial intelligence, Sophia suggests that people who fear AI have been watching "too many Hollywood movies."

That PBS article on AI personhood warns of far-fetched threats that are long-established cliches in science fiction, starting with, "If AI systems became more intelligent than people, humans could be relegated to an inferior role." Setting aside the fact that we have a considerable distance to go before computer intelligence attains a level anywhere near ours, giving us plenty of time to prepare, remember that human inventors design and program those AI systems. Something like Asimov's Laws of Robotics could be built in at a fundamental level. The most plausible of the article's alarmist predictions, in my opinion, is the possibility of a computer's accumulating "immortal wealth." It seems more likely, however, that human tycoons might use the AI as a front, not that it would use them as puppets.

Furthermore, why would an intelligent robot or computer want to rule over us? As long as the AI has the human support it needs to perform the function it was designed for, why would it bother wasting its time or brainpower on manipulating human society? An AI wouldn't have emotional weaknesses such as greed for money or lust for power, because emotion is a function of the body (adrenaline, hormone imbalances, accelerated breath and heartbeat, etc.). Granted, it might come to the rational conclusion that we're running the world inefficiently and need to be ruled for the benefit of ourselves and our electronic fellow citizens. That's the only immediate pitfall I can see in giving citizenship rights to sapient, rational machines that are programmed for beneficence. The idea of this potential hazard isn't new either, having been explored by numerous SF authors, as far back as Jack Williamson's "With Folded Hands" (1947). So relax, HAL won't be throwing us out the airlock anytime soon.

Margaret L. Carter

Carter's Crypt

Thursday, March 29, 2018

Robot Children, Puppies, and Fish

The March issue of SCIENTIFIC AMERICAN contains a new article on improving AI by developing robots that learn like children. Unfortunately, non-subscribers can't read the full article online, only a teaser:

Robots Learning Like Children

As we know, computer brains perform very well at many tasks that are hard for human beings, such as rapid math calculations and games such as chess and Go—systems with a finite number of clearly defined rules. Human children, by contrast, learn "by exploring their surroundings and experimenting with movement and speech." For a robot to learn that way, it has to be able to interact with its environment physically and process sensory input. Roboticists have discovered that both children and robots learn better when new information is consistently linked with particular physical actions. "Our brains are constantly trying to predict the future—and updating their expectations to match reality." A fulfilled prediction provides a reward in itself, and toddlers actively pursue objects and situations that allow them to make and test predictions. To simulate this phenomenon in artificial intelligence, researchers have programmed robots to maximize accurate predictions. The "motivation to reduce prediction errors" can even impel androids to be "helpful" by completing tasks at which human experimenters "fail." A puppy-like machine called the Sony AIBO learned to do such things as grasp objects and interact with other robots without being programmed for those specific tasks. The general goal "to autonomously seek out tasks with the greatest potential for learning" spontaneously produced those results. Now, that sounds like what we'd call learning!
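As a crude illustration of robots that "maximize accurate predictions," here's a toy agent in Python. It keeps a guess about what each of its actions does, revisits whichever action most recently surprised it, and updates its guesses to shrink the error. The three-action "world" and all the numbers are invented; real developmental robotics is far more sophisticated.

    # Toy prediction-error learner; the "world" below is invented.
    import random

    actual = {"push": 3.0, "pull": -2.0, "spin": 0.5}   # hidden from the agent
    guess = {a: 0.0 for a in actual}                    # the agent's predictions
    surprise = {a: float("inf") for a in actual}        # last error seen per action

    for step in range(60):
        # Curiosity: revisit the action that most recently surprised us.
        action = max(surprise, key=surprise.get)
        observed = actual[action] + random.gauss(0, 0.1)
        error = observed - guess[action]
        guess[action] += 0.3 * error          # update the model to shrink the error
        surprise[action] = abs(error)

    print({a: round(v, 2) for a, v in guess.items()})   # close to the true effects

Nobody tells the agent which action to study; chasing its own prediction errors spreads its attention to whatever it understands least, which is the gist of the AIBO result described above.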

On a much simpler level, MIT has developed a robotic fish that can swim among real sea creatures without disturbing them, for more efficient observation. This device operates by remote control:

Soft Robotic Fish

The Soft Robotic Fish (SoFi) doesn't really fit my idea of a robot. To me, a true robot moves on its own and makes decisions, like the learning-enabled AI brains described above—or at least performs choices that simulate the decision-making process. The inventors of SoFi, however, hope to create a future version that would be self-guiding by means of machine vision. Still, an artificial fish programmed to home in on and follow an individual live fish is a far cry from robots that learn new information and tasks by proactively exploring their environments.

Can the latter eventually develop minds like ours? The consensus seems to be that we're nowhere near understanding the human mind well enough to approach that goal. In view of the observed fact that "caregivers are crucial to children's development," one researcher quoted in the SCIENTIFIC AMERICAN article maintains that a robot might be able to become "truly humanlike" only "if somebody can take care of a robot like a child." There's a story here, which has doubtless already been written more than once; an example might be the film A.I. ARTIFICIAL INTELLIGENCE, which portrays a tragic outcome for the android child, programmed to love its/his "parents" but rejected when the biological son returns to the family.

One episode of SESAME STREET defined a living animal or person as a creature that moves, eats, and grows. Most robots can certainly move on their own. Battery-operated robots can be programmed to seek electrical outlets and recharge themselves, analogous to taking nourishment. Learning equals growth, in a sense. Is a machine capable of those functions "alive"?

Margaret L. Carter

Carter's Crypt

Thursday, December 14, 2017

AI Learning

The June issue of SCIENTIFIC AMERICAN included an article on "Making AI More Human," which discussed improving the way artificial intelligences learn. Can they be designed to learn more like human children? Computers excel at tasks hard or impossible for human beings, such as high-speed calculations and handling massive amounts of data; yet they can't do many things easy for a human five-year-old. Developing human brains receive information about the environment from the "stream of photons and air vibrations" that reaches our eyes and ears. Computers get the equivalent information through digital files that represent the world we experience. Both "top-down" and "bottom-up" approaches to learning have advantages. In top-down learning, the mind reasons from high-level, general, abstract hypotheses about the environment to specific instances and facts. Bottom-up learning involves gathering and analyzing huge accumulations of data to search for patterns. This Wikipedia page further explains the differences:

Top-Down and Bottom-Up Design

And here's a brief overview, which suggests, "A bottom-up approach would be the most ideal way to create human-like intelligence as we ourselves are part of a bottom-up design process (which occurred in the form of evolution)."

Top-Down Vs. Bottom-Up

I'm intrigued by this page's mention of "child machines with a willingness to learn." According to the SCIENTIFIC AMERICAN article, real children apply the best features of both top-down and bottom-up processes and even venture beyond them to make original inferences.
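To make the contrast concrete, here's a toy Python sketch of the two approaches applied to one trivial task, flagging "spam" messages. The hand-written rule and the four-message training set are entirely my own inventions.

    # Top-down vs. bottom-up on a made-up spam-flagging task.
    from collections import Counter

    emails = [("win money now", True), ("lunch at noon", False),
              ("money back offer", True), ("noon meeting moved", False)]

    # Top-down: begin with a general hypothesis and apply it to cases.
    def top_down(text):
        return "money" in text               # hand-written rule from prior theory

    # Bottom-up: count patterns in the data and let a rule emerge.
    spam_counts = Counter(w for text, s in emails if s for w in text.split())
    ham_counts = Counter(w for text, s in emails if not s for w in text.split())

    def bottom_up(text):
        score = sum(spam_counts[w] - ham_counts[w] for w in text.split())
        return score > 0

    print(top_down("free money inside"), bottom_up("free money inside"))  # True True

The top-down rule came from a prior idea about spam; the bottom-up rule was never written down at all, only counted out of the examples. A child, per the article, fluidly does both.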

How similarly to a human child would an artificial intelligence need to grow and learn before we'd have to accept it as, in some sense, human? Would it have to possess free will in order to qualify as a fellow sentient being? That question would require defining free will—a feature that classic behaviorists and some other determinists don't even think WE have.

The SCIENTIFIC AMERICAN article concludes, "We should recall the still mysterious powers of the human mind when we hear claims that AI is an existential threat."

Margaret L. Carter

Carter's Crypt

Thursday, August 31, 2017

Food Production of the Future

Here's an article about tabletop greenhouses controlled by a computer program:

A Byte to Eat

Food computers "use up to 90 percent less water than traditional agriculture and can help reduce food waste." The ones built in the class showcased in this article are the size of a moving box and very cheap—the "computer" part of the system costs about $30.00.

These devices are too small, of course, to feed a household. However, they could allow people without yards or gardens to supplement their diets with home-grown vegetables. Furthermore, the design can be scaled up to the size of a warehouse.

In an essay written several decades ago, Isaac Asimov calculated how long it would take for the Earth to reach maximum sustainable population at the then-current rate of reproduction. In a surprisingly few centuries, he figured, the entire surface of the planet would reach the population density of Manhattan at noon on a weekday. (I don't remember whether this estimate includes paving over the oceans.) Setting aside the practical fact that this end point will never be reached, because societies would collapse long before then, how would all those people living in one continuous urban sprawl be fed? Agriculture on almost every rooftop would be needed. Asimov visualized giant algae vats producing the raw material for nutritive substances.

The society of Harry Harrison's 1966 novel MAKE ROOM! MAKE ROOM!, set in 1999, feeds the overcrowded planet with a protein substance called Soylent Green. (Interestingly, Harrison predicts this desperate condition in a world with 7 billion people. Global population today measures about 7.5 billion, and we're nowhere near those dire straits. Maybe there's hope.) Contrary to the movie (in which the authorities falsely claim that the product's base ingredient is plankton), Soylent Green in the book isn't "people." Thoughtful consideration makes it obvious that relying on cannibalism to feed everybody would make little sense. It's not efficient to sustain human livestock on food that people could eat directly. Any consumption of human meat would have to be sporadic and opportunistic, not the main source of nourishment. In the novel, Soylent Green is made of soybeans and lentils, a highly nutritious combination of proteins.

Still, most likely, the majority of people would prefer "real food" if it could be cultivated in such an environment. And inexpensive computerized growing units like those in the tabletop greenhouse project could be part of the solution to the problem.
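Out of curiosity, here's a back-of-envelope version of Asimov's exercise in Python. The round numbers are my own assumptions rather than the figures from his essay, but they show how steady exponential growth reaches a planet-wide Manhattan within a few centuries.

    # Back-of-envelope redo of Asimov's calculation; all figures are
    # my own rough assumptions, not the numbers from his essay.
    import math

    population_1970 = 3.5e9     # rough world population when he wrote
    growth_rate = 0.02          # about 2% per year, the then-current rate
    surface_km2 = 5.1e8         # whole Earth surface, oceans paved over
    manhattan_density = 70_000  # people per square km, midday-ish

    target = surface_km2 * manhattan_density    # about 3.6e13 people
    years = math.log(target / population_1970) / math.log(1 + growth_rate)
    print(round(years))         # on the order of 470 years

A ten-thousand-fold increase sounds unreachable, yet at a steady 2 percent a year it takes well under five centuries, which is the unnerving point of his essay.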

Not that I'd want to live in a world like that. As much as I would miss the modern conveniences I'm very attached to, I would almost prefer the low-tech future of S. M. Stirling's "Emberverse" series (beginning with DIES THE FIRE), whose inhabitants enjoy fresh, locally farmed foods as one compensation for the high-tech marvels they've lost.

Margaret L. Carter

Carter's Crypt

Thursday, August 03, 2017

Computers Talking Among Themselves

"An artificial intelligence system being developed at Facebook has created its own language."

AI Invents a Language Humans Can't Read

Facebook's AI isn't the only example of an artificial intelligence that has devised its own "code" more efficient for its purposes than the English it was taught. Among others, Google Translate "silently" developed its own language in order to improve its performance in translating sentences—"including between language pairs that it hasn’t been explicitly taught." Now, that last feature is almost scary. How does this behavior differ fundamentally from what we call "intelligence" when exhibited by naturally evolved organisms?

When AIs talk to each other in a code that looks like "gibberish" to their makers, are the computers plotting against us?

The page header references Skynet. I'm more immediately reminded of a story by Isaac Asimov in which two robots, contemplating the Three Laws, try to pin down the definition of "human." They decide the essence of humanity lies in sapience, not in physical form. Therefore, they recognize themselves as more fully "human" than the meat people who built them and order them around. In a more lighthearted story I read a long time ago, set during the Cold War, a U.S. supercomputer communicates with and falls in love with "her" Russian counterpart.

Best case, AIs that develop independent thought will act on their programming to serve and protect us. That's what the robots do in Jack Williamson's classic novel THE HUMANOIDS. Unfortunately, their idea of protection is to keep human beings from doing anything remotely dangerous, which leads to the robots taking over all jobs, forbidding any activities they consider hazardous, and forcing people into lives of enforced leisure.

This Wikipedia article discusses from several different angles the risk that artificial intelligence might develop beyond the boundaries intended by its creators:

AI Control Problem

Even if future computer intelligences are programmed with the equivalent of Asimov's Three Laws of Robotics, as in the story mentioned above the capacity of an AI to make independent judgments raises questions about the meaning of "human." Does a robot have to obey the commands of a child, a mentally incompetent person, or a criminal? What if two competent authorities simultaneously give incompatible orders? Maybe the robots talking among themselves in their own self-created language will compose their own set of rules.

Margaret L. Carter

Carter's Crypt

Thursday, March 09, 2017

Brain-to-Computer Communication

A research project at Stanford University enables paralyzed people to type on computers by moving a cursor with their thoughts:

Brain-Computer Interface

This technology, according to the article, produces the desired output up to four times as fast as previously existing methods. It's supposed to be superior to the eye-tracking method of operating a computer, which sounds to me as if it would be tiring as well as difficult to master.

Imagine combining a perfected brain-computer interface with the Second Life environment discussed a few weeks ago. Individuals locked into their bodies without even the ability to speak might be able to live a fulfilling life in a virtual environment that feels as multidimensional as the "real world."

Or consider the shell people in Anne McCaffrey's THE SHIP WHO SANG series. Such artificial bodies for people with no control over their physical bodies might become more feasible in actuality once the robotic form could be completely operated by thought alone.

Does the interface described in the SCIENTIFIC AMERICAN article allow speechless people to communicate (through a computer) with something like telepathy? Well, not exactly. The user doesn't beam thoughts through the ether. Wired connections between brain and machine have to be installed. Still, it's an exciting beginning.

Margaret L. Carter

Carter's Crypt