
Thursday, January 11, 2024

Robotic Companions

A robotic device called ElliQ, which functions as an AI "companion" for older people, is now available for purchase by the general public at a price of only $249.99 (plus a monthly subscription fee):

Companion Robot

As shown in the brief video on this page, "she" has a light-up bobble-head but no face. Her head turns and its light flickers in rhythm with her voice, which in my opinion is pleasant and soothing. The video describes her as "empathetic." From the description of the machine, it sounds to me like a more advanced incarnation of inanimate personal assistants similar to Alexa (although I can't say for sure because I've never used one). The bot can generate displays on what looks like the screen of a cell phone. ElliQ's makers claim she "can act as a proactive tool to combat loneliness, interacting with users in a variety of ways." She can remind people about health-related activities such as exercising and taking medicine, place video calls, order groceries, engage in games, tell jokes, play music or audiobooks, and take her owner on virtual "road trips," among other services. She can even initiate conversations by asking general questions.

Here's the manufacturer's site extolling the wonders of ElliQ:

ElliQ Product Page

They call her "the sidekick for healthier, happier aging" that "offers positive small talk and daily conversation with a unique, compassionate personality." One has to doubt the "unique" label for a mass-produced, pre-programmed companion, but she does look like fun to interact with. I can't help laughing, however, at the photo of ElliQ's screen greeting her owner with "Good morning, Dave." Haven't the creators of this ad seen 2001: A SPACE ODYSSEY? Or maybe they inserted the allusion deliberately? I visualize ElliQ locking the client in the house and stripping the premises of all potentially dangerous features.

Some people have reservations about devices of this kind, naturally. Critics express concerns that dependence on bots for elder care may be "alienating" and actually increase the negative effects of isolation and loneliness. On the other hand, in my opinion, if someone has to choose between an AI companion and nothing, wouldn't an AI be better?

I wonder why ElliQ doesn't have a face. Worries about the uncanny valley effect, maybe? I'd think she could be given animated eyes and mouth without getting close enough to a human appearance to become creepy.

If this AI were combined with existing machines that can move around and fetch objects autonomously, we'd have an appliance approaching the household servant robots of Heinlein's novel THE DOOR INTO SUMMER. That book envisioned such marvels existing in 1970, a wildly optimistic notion, alas. While I treasure my basic Roomba, it does nothing but clean carpets and isn't really autonomous. I'm not at all interested in flying cars, except in SF novels or films. Can you imagine the two-dimensional, ground-based traffic problems we already live with expanded into three dimensions? Could the average driver be trusted with what amounts to a personal aircraft in a crowded urban environment? No flying car for me, thanks -- where's my cleaning robot?

Margaret L. Carter

Please explore love among the monsters at Carter's Crypt.

Thursday, January 04, 2024

AI as a Bubble

Cory Doctorow's latest LOCUS column analyzes AI as a "tech bubble":

What Kind of Bubble Is AI?

Although I had a vague idea of what economists mean by "bubble," I looked it up to make sure. I thought of the phenomenon as something that expands quickly and looks pretty but will burst sooner or later. The Wikipedia definition comes fairly close to that concept: "An economic bubble (also called a speculative bubble or a financial bubble) is a period when current asset prices greatly exceed their intrinsic valuation, being the valuation that the underlying long-term fundamentals justify." The term originated with the South Sea Bubble of the early eighteenth century, involving vastly inflated stocks. The Dutch "tulip mania" of the seventeenth century offers another prominent example.

Doctorow takes it for granted that AI fits into this category. He begins his essay with, "Of course AI is a bubble. It has all the hallmarks of a classic tech bubble." He focuses on the question of what KIND of bubble it is. He identifies two types, "The ones that leave something behind, and the ones that leave nothing behind." Naturally, the first type is desirable, the second bad. He analyzes the current state of the field with numerous examples, yet always with the apparent underlying assumption that the "bubble" will eventually "pop." Conclusion: "Our policymakers are putting a lot of energy into thinking about what they’ll do if the AI bubble doesn’t pop – wrangling about 'AI ethics' and 'AI safety.' But – as with all the previous tech bubbles – very few people are talking about what we’ll be able to salvage when the bubble is over."

This article delves into lots of material new to me, since I confess I don't know enough about the field to have given it much in-depth thought. I have one reservation about Doctorow's position, however -- he discusses "AI" as if it were a single monolithic entity, despite the variety of examples he refers to. Can all possible levels and applications of artificial intelligence be lumped together as components of one giant bubble, to endure or "pop" together? Maybe those multitudes of different applications are what he's getting at when he contemplates "what we'll be able to salvage"?

Margaret L. Carter

Please explore love among the monsters at Carter's Crypt.

Thursday, December 14, 2023

Decoding Brain Waves

Can a computer program read thoughts? An experimental project uses AI as a "brain decoder," in combination with brain scans, to "transcribe 'the gist' of what people are thinking, in what was described as a step toward mind reading":

Scientists Use Brain Scans to "Decode" Thoughts

The example in the article discusses how the program interprets what a person thinks while listening to spoken sentences. Although the system doesn't translate the subject's thoughts into the exact same words, it's capable of accurately rendering the "gist" into coherent language. Moreover, it can even accomplish the same thing when the subject simply thinks about a story or watches a silent movie. Therefore, the program is "decoding something that is deeper than language, then converting it into language." Unlike earlier types of brain-computer interfaces, this noninvasive system doesn't require implanting anything in the person's brain.

However, the decoder isn't perfect yet; it has trouble with personal pronouns, for instance. Moreover, it's possible for the subject to "sabotage" the process with mental tricks. Participating scientists reassure people concerned about "mental privacy" that the system works only after it has been trained on the particular person's brain activity through many hours in an MRI scanner. Nevertheless, David Rodriguez-Arias Vailhen, a bioethics professor at Spain's Granada University, expresses apprehension that more highly developed versions of such programs might lead to "a future in which machines are 'able to read minds and transcribe thought,'" warning this could possibly take place against people's will, such as when they are sleeping.

Here's another article about this project, explaining that the program functions on a predictive model similar to ChatGPT. As far as I can tell, the system works only with thoughts mentally expressed in words, not pure images:

Brain Activity Decoder Can Read Stories in People's Minds

Researchers at the University of Texas at Austin suggest as one positive application that the system "might help people who are mentally conscious yet unable to physically speak, such as those debilitated by strokes, to communicate intelligibly again."

An article on the Wired site explores in depth the nature of thought and its connection with language from the perspective of cognitive science:

Decoders Won't Just Read Your Mind -- They'll Change It

Suppose the mind isn't, as traditionally assumed, "a self-contained, self-sufficient, private entity"? If not, is there a realistic risk that "these machines will have the power to characterize and fix a thought’s limits and bounds through the very act of decoding and expressing that thought"?

How credible is the danger foreshadowed in this essay? If AI eventually gains the power to decode anyone's thoughts, not just those of individuals whose brain scans the system has been trained on, will literal mind-reading come into existence? Could a future Big Brother society watch citizens not just through two-way TV monitors but by inspecting the contents of their brains?

Margaret L. Carter

Please explore love among the monsters at Carter's Crypt.

Thursday, September 21, 2023

Character Brainstorming with AI

Here's a WRITER'S DIGEST article about how an author might use ChatGPT as an aid to composition without actually having the program do the writing:

Using AI to Develop Characters

The author, Laura Picklesimer, describes her experiment in workshopping character ideas with the help of generative AI. She began by asking the program how it might be able to help in character creation, and it generated a list of ten quite reasonable although not particularly exciting possibilities. She then implemented one of the suggestions by requesting ideas for characters in a thriller set in 1940s Los Angeles. The result consisted of "a host of rather stereotypical characters." When she asked the AI to suggest ways to subvert those characters, she was more impressed with the answers. Reading that list, I agree something like it might actually be useful in sparking story ideas. Her advice to writers who consider using such a program includes being "as specific as possible with your prompts, making use of key words and specifying how long ChatGPT’s response should be." She also points out, "It may take multiple versions of a prompt to arrive at a helpful response."

I was intrigued to learn that a program called Character.AI can be set up to allow a writer to carry on a conversation with a fictional character, either from literature or one of her own creations. The article shows a couple of examples.

Picklesimer also cautions potential users against the limitations of systems such as ChatGPT, including their proneness to "hallucinations." When she asked the AI about its own limitations, it answered honestly and in detail. Most importantly for creative writers, in my opinion, it can easily perpetuate stereotypes, cliches, and over-familiar tropes. It also lacks the capacity for emotional depth and complexity, of course. If an author keeps these cautions in mind, though, I think experimenting with such programs as brainstorming tools could be fun and potentially productive -- just as a search in a thesaurus might not turn up the word you're looking for but might surprise you with a better idea.

It's worth noting, however, that this essay links to another one titled "Why We Must Not Cede Writing to the Machines" -- which Picklesimer, of course, doesn't advocate doing.

Do Not Go Gentle

Margaret L. Carter

Please explore love among the monsters at Carter's Crypt.

Thursday, September 14, 2023

AI Compositions and Their Influence on Letters as Signals

In Cory Doctorow's latest column, he brings up a potential unintended byproduct of overusing "large language models," aka chatbots such as ChatGPT:

Plausible Sentence Generators

He recalls a recent incident when he wrote a letter of complaint to an airline, threatening to sue them in small claims court, and fed it to such a program for rewriting. He was surprised at the high quality of the result. The site changed his pretty good "legal threat letter" into a noticeably stronger "vicious lawyer letter."

Letters of that type, as well as another of his examples, letters of recommendation from college professors, are performative. They transmit not only information but "signals," as Doctorow puts it. A stern letter from a lawyer sends the message that somebody cares enough about an issue to spend a considerable amount of money hiring a professional to write the letter. A recommendation from a professor signals that he or she considers the student worthy of the time required to write it.

One of Spider Robinson's Callahan's Bar stories mentions a similar performative function that shows up in an oral rather than written format: spousal arguments. The winner of the argument is likely to be the one who dramatizes his or her emotional investment in the issue with more demonstrative passion than the other spouse.

In the case of written performances, Doctorow speculates on what will happen if AI-composed (or augmented) epistles become common. When it becomes generally known that it's easy and inexpensive or free to write a letter of complaint or threat, such messages won't signal the serious commitment they traditionally do. Therefore, they'll become devalued and probably won't have the intended impact. The messages (like form letters, though Doctorow doesn't specifically mention those) will lack "the signal that this letter was costly to produce, and therefore worthy of taking into consideration merely on that basis."

I'm reminded of the sample letters to congresscritters included in issues of the MILITARY OFFICER magazine whenever Congress is considering legislation that will have serious impact on members of the armed services and their families. These form letters are meant to be torn out of the magazine, signed, and mailed by subscribers to the presiding officers of the House and Senate. But, as obvious form letters, they take only a little more effort than e-mails -- envelopes must be addressed and stamps affixed, but that's about it. So how much effect on a legislator's decision can they have?

Miss Manners distinctly prefers old-fashioned, handwritten thank-you notes over e-mailed thanks because the former show that the recipient went to a certain amount of effort. I confess I do send thank-you notes by e-mail whenever possible. The acknowledgment reaches the giver immediately instead of at whatever later time I work up the energy to get around to it. So, mea culpa, I plead guilty! However, the senders of the gifts themselves have almost completely stopped writing snail-mail letters, so in communication with them, e-mail doesn't look lazy (I hope), just routine. Context is key.

Margaret L. Carter

Please explore love among the monsters at Carter's Crypt.

Thursday, July 27, 2023

Gray Goo Doomsday?

Could runaway nanobots take over the planet?

"Gray goo is a term used to describe a lifeless world completely occupied by self-replicating nanomaterials that have consumed the energy of all life forms due to uncontrolled replication."

The complete explanation:

Definition of Gray Goo

A longer, more technical treatment on Wikipedia:

Wikipedia: Gray Goo

I came across the term in a short piece in the BALTIMORE SUN this past Sunday. Discovering how long this idea has been around, I was surprised I hadn't heard of it before. Unrestrained nanobot proliferation is compared to runaway generative AI. The example given in the newspaper refers to ChatGPT trying to be funny. When asked to tell a joke, the program falls back on the same twenty-five jokes over and over, about 90% of the time. If this example is typical of the effect of artificial intelligence on communication, could ever-increasing dependence on AI lead to decreasing originality and creativity? The sidebar in the SUN is an excerpt from this essay:

If Generative AI Runs Rampant

While I don't necessarily think we're doomed yet, this hypothetical scenario about the long-term effects of overuse of AI in creative work does raise disturbingly plausible concerns. As far as the basic viewing-with-alarm "gray goo" scenario is concerned, there's an obvious counter-argument: Nanobots couldn't reproduce uncontrollably unless we first invent them and then release them into the wild without safeguards analogous to Asimov's Three Laws of Robotics. So we probably won't have to worry about getting smothered in goo anytime soon.

Margaret L. Carter

Please explore love among the monsters at Carter's Crypt.

Thursday, June 15, 2023

The Internet Knows All

This week I acquired a new HP computer to replace my old Dell, which had started unpredictably freezing up at least once per day. Installing Windows 11 didn't fix it. It had reached the point where even CTRL-ALT-DEL didn't unfreeze it; I had to turn it off manually and restart every time it failed. It feels great to have a reliable machine again.

Two things struck me about the change: First, the price of the new one, bundled with a keyboard and mouse: about $500. Our first computer, an Apple II+ purchased as a gift at Christmas of 1982, cost over $2000 with, naturally, nowhere near the capabilities of today's devices. No hard drive, no Windows or Apple equivalent thereof, and of course no internet. And $2000 in 1982 was worth a whole lot more than $2000 now. Imagine spending the equivalent in 2023 dollars on a home electronic device. Back then, it was a serious financial decision that put us into debt for a long time. Thanks to advances in technology, despite inflation some things DO get cheaper. An amusing memory: After unveiling the wondrous machine in 1982, my husband decreed, "The kids are never going to touch this." LOL. That rule didn't last long! Nowadays, in contrast, we'd be lost if we couldn't depend on our two youngest offspring (now middle-aged) for tech support.
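(To put a rough number on that: using approximate annual-average consumer price index figures -- about 96 for 1982 versus about 305 for 2023 -- that $2,000 works out to $2,000 × 305/96, or roughly $6,300 in 2023 dollars. A dozen of today's $500 machines.)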

The second thing that struck me after our daughter set up the computer: How smoothly and, to my non-tech brain, miraculously, Windows and Google Chrome remembered all my information from the previous device. Bookmarks, passwords, document files (on One Drive), everything I needed to resume work almost as if the hardware hadn't been replaced. What a tremendous convenience. On the other hand, it's a little unsettling, too. For me, the most eerie phenomenon is the way many websites know information from other websites they have no connection to. For example, the weather page constantly shows me ads for products I've browsed on Amazon. Sometimes it seems that our future AI overlords really do see all and know all.

In response to recent warnings about the "existential threat" posed by AI, science columnist Keith Tidman champions a more optimistic view:

Dark Side to AI?

He points out the often overlooked difference between weak AI and strong AI. Weak AI, which already exists, isn't on the verge of taking over the world. Tidman, however, seems less worried about the subtle dangers of the many seductively convenient features of the current technology than most commentators are. As for strong AI, it's not here yet, and even if it eventually develops human-like intelligence, Tidman doesn't think it will try to dominate us. He reminds us, "At the moment, in some cases what’s easy for humans to do is extraordinarily hard for machines to do, while the converse is true, too." If this disparity "evens out" in the long run, he nevertheless believes, "Humans won’t be displaced, or harmed, but creative human-machine partnerships will change radically for the better."

An amusing incidental point about this article: On the two websites I found by googling for it, one page is headlined, "There Is Inevitable Dark Side to AI" and the other, "There Is No Inevitable Dark Side to AI." So even an optimistic essay can be read pessimistically! (Unless the "No" was just accidentally omitted in the first headline. But it still looks funny.)

Margaret L. Carter

Carter's Crypt

Thursday, June 08, 2023

Existential Threat?

As you may have seen in the news lately, dozens of experts in artificial intelligence have supported a manifesto claiming AI could threaten the extinction of humanity:

AI Could Lead to Extinction

Some authorities, however, maintain that this fear is overblown and "a distraction from issues such as bias in systems that are already a problem" and other "near-term harms."

Considering the "prophecies of doom" in detail, we find that the less radically alarmist doom-sayers aren't talking about Skynet, HAL 9000, or even self-aware Asimovian robots circumventing the Three Laws to dominate their human creators. More immediately realistic warnings call attention to risks posed by such things as the "deep fake" programs Rowena discusses in her recent post. In the near future, we could see powerful AI "drive an exponential increase in the volume and spread of misinformation, thereby fracturing reality and eroding the public trust, and drive further inequality, particularly for those who remain on the wrong side of the digital divide."

On the other hand, a member of an e-mail list I subscribe to has written an essay maintaining that the real existential threat of advanced AI doesn't consist of openly scary threats, but irresistibly appealing cuteness:

Your Lovable AI Buddy

Suppose, in the near future, everyone has a personal AI assistant, more advanced and individually programmed than present-day Alexa-type devices? Not only would this handheld, computerized friend keep track of your schedule and appointments, preorder meals from restaurants, play music and stream videos suited to your tastes, maybe even communicate with other people's AI buddies, etc., "It knows all about you, and it just wants to make you happy and help you enjoy your life. . . . It would be like a best friend who’s always there for you, and always there. And endlessly helpful." As he mentions, present-day technology could probably create a device like that now. And soon it would be able to look much more lifelike than current robots. Users would get emotionally attached to it, more so than with presently available lifelike toys. What could possibly be the downside of such an ever-present, "endlessly helpful" friend or pet?

Not so fast. If we're worried about hacking and misinformation now, think of how easily our hypothetical AI best friend could subtly shape our view of reality. At the will of its designers, it could nudge us toward certain political or social viewpoints. It could provide slanted, "carefully filtered" answers to sensitive questions. This development wouldn't require "a self-aware program, just one that seems to be friendly and is capable of conversation, or close enough." Building on its vast database of information collected from the internet and from interacting with its user, "It wouldn’t just be trained to emotionally connect with humans, it would be trained to emotionally manipulate humans."

In a society with a nearly ubiquitous as well as almost omniscient product like that, the disadvantaged folks "on the wrong side of the digital divide" who couldn't afford one might even be better off, at least in the sense of privacy and personal freedom.

Margaret L. Carter

Carter's Crypt

Thursday, April 13, 2023

How Will AI Transform Childhood?

According to columnist Tyler Cowen, "In the future, middle-class kids will learn from, play with and grow attached to their own personalized AI chatbots."

I read this essay in our local newspaper a couple of weeks ago. Unfortunately, I wasn't able to find the article on a site that didn't require registering for an account to read it. The essence of its claim is that "personalized AI chatbots" will, at some not-too-distant time, become as ubiquitous as pets, with the advantage that they won't bite. Parents will be able to control access to content (until the kid learns to "break" the constraints or simply borrows a friend's less restricted device) and switch off the tablet-like handheld computers remotely. Children, Cowen predicts, will love these; they'll play the role of an ever-present imaginary friend that one can really interact with and get a response from.

He envisions their being used for game play, virtual companionship, and private AI tutoring (e.g., learning foreign languages much more cheaply than through classes or individual tutors), among other applications. I'm sure our own kids would have loved a device like this, if it had been available in their childhood. I probably would have, too, back when dinosaurs roamed the Earth and similar inventions were the wild-eyed, futuristic dreams of science fiction. If "parents are okay with it" (as he concedes at one point), the customized AI companion could be a great boon—with appropriate boundaries and precautions. For instance, what about the risks of hacking?

One thing that worries me, however, isn't even mentioned in the article (if I remember correctly from the paper copy I neglected to keep): The casual reference to "middle-class kids." The "digital divide" has already become a thing. Imagine the hardships imposed on students from low-income families, who couldn't afford home computers, by the remote learning requirements of the peak pandemic year. What will happen when an unexamined assumption develops that every child will have a personal chatbot device, just as many people and organizations, especially businesses and government offices, now seem to assume everybody has a computer and/or a smart phone? (It exasperates me when websites want to confirm my existence by sending me texts; I don't own a smart phone, don't text, and don't plan to start.) Not everybody does, including some who could easily afford them, such as my aunt, who's in her nineties. Those assumptions create a disadvantaged underclass, which could only become more marginalized and excluded in the case of children who don't belong to the cohort of "middle-class kids" apparently regarded as the norm. Will school districts provide free chatbot tablets for pupils whose families fall below a specified income level? With a guarantee of free replacement if the thing gets broken, lost, or stolen?

In other AI news, a Maryland author has self-published a horror book for children, SHADOWMAN, with assistance from the Midjourney image-generating software to create the illustrations:

Shadowman

In an interview quoted in a front-page article of the April 12, 2023, Baltimore Sun, she explains that she used the program to produce art inspired by and in the style of Edward Gorey. As she puts it, "I created the illustrations, but I did not hand draw them." She's perfectly transparent about the way the images were created, and the pictures don't imitate any actual drawings by Gorey. The content of each illustration came from her. "One thing that's incredible about AI art," she says, "is that if you have a vision for what you're wanting to make it can go from your mind to being." And, as far as I know, imitating someone else's visual or verbal style isn't illegal or unethical; it's one way novice creators learn their craft. And yet. . . might this sort of thing, using software "trained" on the output of one particular creator, skate closer to plagiarism than some other uses of AI-generated prose and art?

Another AI story in recent news: Digidog, a robot police K-9 informally known as Spot, is being returned to active duty by the NYPD. The robot dog was introduced previously but shelved because some people considered it "creepy":

Robot Dog

Margaret L. Carter

Carter's Crypt

Thursday, February 09, 2023

Creative AI?

There's been a lot of news in the media lately about AI programs that generate text or images. One of the e-mail lists I subscribe to recently had a long thread about AI text products and especially art. Some people argued about whether a program that gets "ideas" (to speak anthropomorphically) from many different online images and combines multiple elements from them to produce a new image unlike any of the sources is infringing artists' copyrights. I tend to agree with the position that such a product is in no sense a "copy" of any particular original.

Here's the Wikipedia article on ChatGPT (Chat Generative Pre-trained Transformer):

ChatGPT

The core function of that program is "to mimic a human conversationalist." However, it does many other language-related tasks, such as "to write and debug computer programs" and "to compose music, teleplays, fairy tales, and student essays" and even "answer test questions," as well as other functions such as playing games and emulating "an entire chat room." It could also streamline rote tasks such as filling out forms. It has limitations, though, which are acknowledged by its designers. Like any AI, it's constrained by its input, and it may sometimes generate nonsense. When asked for an opinion or judgment, the program replies that, being an AI, it doesn't have feelings or opinions.

This week the Baltimore SUN ran an editorial about the potential uses and abuses of the program. It includes a conversation with ChatGPT, asking about various issues of interest to Maryland residents. For instance, the AI offers a list of "creative" uses for Old Bay seasoning. It produces grammatically correct, coherent prose but tends to answer in generalizations that would be hard to disagree with. One drawback is that it doesn't provide attribution or credit for its sources. As the editorial cautions, "That makes fact-checking difficult, and puts ChatGPT (and its users) at risk of both plagiarizing the work of others and spreading misinformation."

A Chat with ChatGPT

Joshua Wilson, an associate professor of education at the University of Delaware, discusses the advantages and limitations of ChatGPT:

Writing Without Thinking?

It can churn out an essay on a designated topic, drawing on material it garners from the internet. A writer could treat this output as a pre-first-draft that the human creator could then revise and elaborate. It's an "optimal synthesizer" but lacks "nuance and perspective." To forbid resorting to ChatGPT would be futile, he thinks; instead, we need to figure out the proper ways to use it. He sees it as a valid device to save time and effort, provided we regard its product as a "starting point and not a final destination."

David Brooks, a NEW YORK TIMES columnist, offers cautionary observations on art and prose generated by AI programs:

Major in Being Human

He distinguishes between tasks a computer program can competently perform and those that require "a humanistic core," such as "passion, pain, longings. . . imagination, bursts of insight, anxiety and joy." He advises the next generation to educate themselves for "skills that machines will not replicate," e.g., creativity, empathy, a "distinct personal voice," etc.

Some school systems have already banned ChatGPT in the classroom as a form of cheating. Moreover, AI programs exist with the function of detecting probable AI-generated prose. From what I've read about text-generating and art-producing programs, it seems to me that in principle they're tools like spellcheck and electronic calculators, even though much more complex. Surely they can be used for either fruitful or flawed purposes, depending on human input.

Margaret L. Carter

Carter's Crypt

Thursday, July 21, 2022

The Future of Elections

Earlier this week, we voted in the primary election in this state. Thinking about voting reminded me of a story I read many years ago (whose title and author I don't remember). This speculative piece on how elections might work in the distant future proposed a unique procedure that could function only with a near-omniscient AI accumulating immense amounts of data.

After analyzing the demographics of the country in depth, the central computer picks a designated voter. This person, chosen as most effectively combining the typical characteristics of all citizens, votes in the national election on behalf of the entire population. The really unsettling twist in the tale is that the "voter" doesn't even literally vote. He (in the story, the chosen representative is a man) answers a battery of questions, if I recall the method correctly. The computer, having collated his responses, determines which candidates and positions he would support.

This method of settling political issues would certainly make things simpler. No more waiting days or potentially weeks for all the ballots to be counted. No contesting of results, since the single aggregate "vote" would settle everything on the spot with no appeal to the AI's decision.

The story's premise seems to have an insurmountable problem, however, regardless of the superhuman intelligence, vast factual knowledge, and fine discrimination of the computer. Given the manifold racial, political, economic, ethnic, and religious diversity of the American people, how could one "typical" citizen stand in for all? An attempt to combine everybody's traits would inevitably involve many direct, irreconcilable contradictions. The AI might be able to come up with one person who satisfactorily represents the majority. When that person's "vote" became official, though, the political rights of minorities (religious, racial, gender, or whatever) would be erased.

A benevolent dictatorship by an all-knowing, perfectly unbiased computer (if we could get around the GIGO principle of its reflecting the biases of its programmers) does sound temptingly efficient at first glance. But I've never read or viewed a story, beyond a speculative snippet such as the one described above, about such a society that ultimately turned out well. Whenever the Enterprise came across a computer-ruled world in the original STAR TREK, Kirk and Spock hastened to overthrow the AI "god" in the name of human free will.

Margaret L. Carter

Carter's Crypt

Thursday, August 26, 2021

Can AI Be a Bad Influence?

In a computer language-learning experiment in 2016, a chat program designed to mimic the conversational style of teenage girls devolved into spewing racist and misogynistic rhetoric. Interaction with humans quickly corrupted an innocent bot, but could AI corrupt us, too?

AI's Influence Can Make Humans Less Moral

Here's a more detailed explanation (from 2016) of the Tay program and what happened when it was let loose on social media:

Twitter Taught Microsoft's AI Chatbot to Be a Racist

The Tay Twitter bot was designed to get "smarter" in the course of chatting with more and more users, thereby, it was hoped, "learning to engage people through 'casual and playful conversation'." Unfortunately, spammers apparently flooded it with poisonous messages, which it proceeded to imitate and amplify. If Tay was ordered, "Repeat after me," it obeyed, enabling anyone to put words in its virtual mouth. However, it also started producing racist, misogynistic, and just plain weird utterances spontaneously. This debacle raises questions such as "how are we going to teach AI using public data without incorporating the worst traits of humanity?"

The L.A. TIMES article linked above, with reference to the Tay episode as a springboard for discussion, explores this problem in more general terms. How can machines "make humans themselves less ethical?" Among other possible influences, AI can offer bad advice, which people have been observed to follow as readily as they do online advice from live human beings; AI advice can "provide a justification to break ethical rules"; AI can act as a negative role model; it can be easily used for deceptive purposes; outsourcing ethically fraught decisions to algorithms can be dangerous. The article concludes that "whenever AI systems take over a new social role, new risks for corrupting human behavior will emerge."

This issue reminds me of Isaac Asimov's Three Laws of Robotics, especially since I've recently been rereading some of his robot-related fiction and essays. As you'll recall, the First Law states, "A robot may not injure a human being or, through inaction, allow a human being to come to harm." In one of Asimov's early stories, a robot learns to lie in order to tell people what they want to hear. As this machine perceives the problem of truth and lies, the revelation of distressing truths would cause humans emotional pain, and emotional harm is still harm. Could AI programs be taught to avoid causing emotional and ethical damage to their human users? The potential catch is that a computer intelligence can acquire ethical standards only by having them programmed in by human designers. As a familiar precept declares, "Garbage in, garbage out." Suppose programmers train an AI to regard the spreading of bizarre conspiracy theories as a vital means of protecting the public from danger?

It's a puzzlement.

Margaret L. Carter

Carter's Crypt

Thursday, August 19, 2021

Mind-Reading Technology

Scientists from the University of California, San Francisco have developed a computer program to translate the brain waves of a 36-year-old paralyzed man into text:

Scientists Translate Brain Waves

They implanted an array of electrodes into the sensorimotor cortex of the subject's brain and "used 'deep-learning algorithms' to train computer models to recognize and classify words from patterns in the participant’s brain activity." The training process consisted of showing words on a screen and having the man think about saying them, going through the mental activity of trying to say the words, which he'd lost the physical ability to do. Once the algorithm had learned to match brain patterns to particular words, the subject could produce text by thinking of sentences that included words from the program's vocabulary. Using this technology, he could generate language at a rate of about fifteen words per minute (although not error-free) as opposed to only five words per minute while operating a computer typing program with movements of his head.
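Just for illustration, here's my own toy sketch in Python of the underlying idea -- learn a template for each word from many noisy recordings, then label new activity by its nearest template. It's purely hypothetical on my part, with made-up data, and vastly simpler than the deep-learning models the researchers actually trained:

```python
# Hypothetical toy sketch (NOT the UCSF team's code): reduce "train a model to
# match brain-activity patterns to particular words" to a nearest-template
# classifier over simulated activity vectors.
import numpy as np

rng = np.random.default_rng(0)
VOCAB = ["water", "family", "good", "help"]

# Pretend each word evokes a characteristic 16-dimensional activity pattern.
true_patterns = {w: rng.normal(size=16) for w in VOCAB}

def record_trial(word):
    """One noisy 'recording' of the activity evoked by trying to say a word."""
    return true_patterns[word] + rng.normal(scale=0.5, size=16)

# Training: average many trials per word into a template (the real study
# needed 48 sessions over 81 weeks to learn its mappings).
templates = {w: np.mean([record_trial(w) for _ in range(40)], axis=0)
             for w in VOCAB}

def decode(activity):
    """Label new activity as the vocabulary word with the nearest template."""
    return min(VOCAB, key=lambda w: np.linalg.norm(activity - templates[w]))

print([decode(record_trial(w)) for w in ["help", "water", "good"]])
# usually -> ['help', 'water', 'good'], though the noisy trials allow errors
```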

Training the program to this point wasn't easy, apparently. The course took 48 sessions over a period of 81 weeks. Still, it's the closest thing to "mind-reading" we have so far, a significant advance over techniques that let a patient control a prosthetic limb by thought alone. According to Dr. Lee H. Schwamm, an officer of the American Stroke Association, “This study represents a transformational breakthrough in the field of brain-computer interfaces."

Here's an article about an earlier experiment in which a paralyzed man learned to produce sentences with "a computer system that turns imagined handwriting into words" at a rate of 18 words per minute:

Mindwriting Brain Computer

The hardware consists of "small, implantable computer chips that read electrical activity straight from the brain." The subject imagined writing letters in longhand, mentally going through the motions. At the same time, the scientists "recorded activity from the brain region that would have controlled his movements." The collected recordings were used to train the AI to translate the man's "mindwriting" into words on a screen. Eventually the algorithm achieved a level of 94.1% accuracy—with the aid of autocorrect, 99%.

While those programs are far from literal telepathy, the ability to read any thoughts that rise to the surface of a subject's mind, they still constitute an amazing advance. As long as such technology requires hardware implanted in an individual's brain, however, we won't have to worry about our computer overlords randomly reading our minds.

Margaret L. Carter

Carter's Crypt

Thursday, June 24, 2021

Woebot

"Virtual help agents" have been developed to perform many support tasks such as counseling refugees and aiding people to access disability benefits. Now a software app named Woebot is claimed to perform actual talk therapy:

Chatbot Therapist

Created by a team at Stanford, "Woebot uses brief daily chat conversations, mood tracking, curated videos, and word games to help people manage mental health." For $39 per month, you can have Woebot check in with you once a day. It doesn't literally talk but communicates by Facebook Messenger. The chatbot mainly asks questions and works through a "decision tree" not unlike, in principle, a choose-your-own-adventure story. It follows the precepts of cognitive therapy, guiding patients to alter their own mental attitudes. Woebot is advertised as "a treatment in its own right," an accessible alternative for people who can't get conventional therapy for whatever reason. If the AI encounters someone in a mental-health crisis, "it suggests they seek help in the real world" and lists available resources. Text-based communication with one's "therapist" may sound less effective than oral conversation, yet in fact it was found that "the texting option actually reduced interpersonal anxiety."
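To make the decision-tree idea concrete, here's a tiny invented example in Python -- my own, not Woebot's actual script or content -- of a chat that branches on the user's answers like pages in a choose-your-own-adventure book:

```python
# Invented example (not Woebot's actual content): a chat that walks a small
# decision tree, branching on the user's reply at each node.
TREE = {
    "start": ("How are you feeling today?", {"good": "good", "bad": "bad"}),
    "good":  ("Glad to hear it! Want to log what went well?",
              {"yes": "log", "no": "end"}),
    "bad":   ("Sorry to hear that. Is it more worry or low mood?",
              {"worry": "worry", "mood": "end"}),
    "worry": ("Try naming the worry in one sentence; writing it down often "
              "shrinks it.", {}),
    "log":   ("Noted. Small wins count.", {}),
    "end":   ("Okay. I'll check in again tomorrow.", {}),
}

def chat():
    node = "start"
    while True:
        prompt, branches = TREE[node]
        print("Bot:", prompt)
        if not branches:          # leaf node: conversation over
            break
        answer = input(f"You ({'/'.join(branches)}): ").strip().lower()
        node = branches.get(answer, node)  # unrecognized input: ask again

if __name__ == "__main__":
    chat()
```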

It's possible that, within the limits of its abilities, this program may be better than a human therapist in that one respect. Many people open up more to a robot than to another person. Human communication may be hampered by the "fear of being judged." Alison Darcy, one of the creators of Woebot, remarks, "There’s nothing like venting to an anonymous algorithm to lift that fear of judgement." One of Woebot's forerunners in this field was a computer avatar "psychologist" called Ellie, developed at the University of Southern California. In a 2014 study of Ellie, "patients" turned out to be more inclined to speak freely if they thought they were talking to a bot rather than a live psychologist. Ellie has an advantage over Woebot in that she's programmed to read body language and tone of voice to "pick up signs of depression and post-traumatic stress disorder." Data gathered in these dialogues are sent to human clinicians. More on this virtual psychologist:

Ellie

Human beings often anthropomorphize inanimate objects. One comic strip in our daily paper regularly shows the characters interacting and arguing with an Alexa-type program like another person in the room and treating the robot vacuum as if it's at least as intelligent as a dog. So why not turn in times of emotional distress to a therapeutic AI? We can imagine a patient experiencing "transference" with Woebot—becoming emotionally involved with the AI in a one-way dependency of friendship or romantic attraction—a quasi-relationship that could make an interesting SF story.

Margaret L. Carter

Carter's Crypt

Thursday, November 12, 2020

More on AI

Cory Doctorow's latest LOCUS column continues his topic from last month, the sharp divide between the artificial intelligence of contemporary technology and the self-aware computers of science fiction. He elaborates on his arguments against the possibility of the former's evolving into the latter:

Past Performance

He explains current machine learning "as a statistical inference tool" that "analyzes training data to uncover correlations between different phenomena." That's how an e-mail program predicts what you're going to type next or a search engine guesses your question from the initial words. An example he analyzes in some detail is facial recognition. Because a computer doesn't "know" what a face is but only looks for programmed patterns, it may produce false positives such as "doorbell cameras that hallucinate faces in melting snow and page their owners to warn them about lurking strangers." AI programs work on a quantitative rather than qualitative level. As remarkably as they perform the functions for which they were designed, "statistical inference doesn’t lead to comprehension, even if it sometimes approximates it." Doctorow contrasts the results obtained by mathematical analysis of data with the synthesizing, theorizing, and understanding processes we think of as true intelligence. He concludes that "the idea that if we just get better at statistical inference, consciousness will fall out of it is wishful thinking. It’s a premise for an SF novel, not a plan for the future."

While I'd like to believe a sufficiently advanced supercomputer with more interconnections, "neurons," and assimilation of data than any human brain could hold might awaken to self-awareness, like Mike in Heinlein's THE MOON IS A HARSH MISTRESS, I must admit Doctorow's argument is highly persuasive. Still, people do anthropomorphize their technology, even naming their Roomba vacuum cleaners. (I haven't done that. Our Roomba is a low-end, fairly dumb model. Its intelligence is limited to changing direction when it bumps into obstacles and returning to its charger when low on power, which I never let it run long enough to do. But nevertheless I give the thing pointless verbal commands on occasion. After all, it ignores me no more thoroughly than the cats do.) People carry on conversations with Alexa and Siri. I enjoy remembering a cartoon I saw somewhere of a driver simultaneously listening to the GPS apps on both the car's system and the cell phone. The two GPS voices are arguing with each other about which route to take.

Remember Eliza, the computer therapist program? She was invented in the 1960s, and supposedly some users mistook her for a human psychologist. You can try her out here:

Eliza

As the page mentions, the dialogue goes best if you limit your remarks to talking about yourself. When I tried to engage her in conversation about the presidential election, her lines quickly devolved into, "Do you have any psychological problems?" (Apparently commenting that one loathes a certain politician is a red flag.) So these AI therapists don't really pass the Turing test. I've read that if you state to one of them, for instance, "Einstein says everything is relative," it will probably respond, "Tell me more about your family." Many years ago, when the two youngest of our sons were preteens, we acquired a similar program, very simple, which one communicated with by typing, and it would type a reply that the computer's speaker would also read out loud. The kids had endless fun writing sentences such as, "I want [long string of numbers] dollars," and listening to the computer voice retort with something like, "I am not here to fulfill your need for ten quintillion, four quadrillion, nine trillion, fifty billion, one hundred million, two thousand, one hundred and forty-one dollars."
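For anyone curious why these programs derail so predictably, here's a bare-bones Eliza-style responder in Python -- my own toy reconstruction, not the real 1960s program. It scans a ranked list of keyword patterns and, when nothing matches, falls back on a stock question, which is exactly the behavior I ran into:

```python
# Toy reconstruction of the Eliza approach (not Weizenbaum's actual script):
# scan ranked regex rules; if none fires, fall back on a stock prod.
import re

RULES = [
    (re.compile(r"\bI am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bI want (.*)", re.I), "Why do you want {0}?"),
    (re.compile(r"\bmy (mother|father|family)\b", re.I),
     "Tell me more about your family."),
    (re.compile(r"\byou\b", re.I), "We were discussing you, not me."),
]
DEFAULT = "Do you have any psychological problems?"

def respond(line):
    for pattern, template in RULES:
        match = pattern.search(line)
        if match:
            return template.format(*match.groups())
    return DEFAULT  # no keyword matched -- hence the non sequiturs

print(respond("I am fed up with politics"))
# -> How long have you been fed up with politics?
print(respond("Einstein says everything is relative"))
# -> Do you have any psychological problems?
```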

Margaret L. Carter

Carter's Crypt

Thursday, September 10, 2020

More on Robots

If convenient, try to pick up a copy of the September 2020 NATIONAL GEOGRAPHIC, which should still be in stores at this time. The feature article, "Meet the Robots," goes into lengthy detail about a variety of different types of robots and their functions, strengths, and limitations. The cover shows a mechanical hand delicately holding a flower. The article on the magazine's website is behind a paywall, unfortunately.

Profusely illustrated, it includes photos of robots that range from human-like to vaguely humanoid to fully non-anthropomorphic. One resembles an ambulatory egg, another a mechanical octopus. As the text points out, form follows function. Some machines would gain nothing by being shaped like people, and for some tasks the human form would actually be more of a drawback than a benefit. Some of those devices perform narrowly defined, repetitive jobs such as factory assembly, while others more closely resemble what science-fiction fans think of as "robots"—quasi-intelligent, partly autonomous machines that can make decisions among alternatives. In many cases, they don't "steal jobs" but, rather, fill positions for which employers have trouble hiring enough live workers. Robots don't get sick or tired, don't suffer from boredom, and can spare human workers from exposure to hazards. On the other hand, the loss of some kinds of jobs to automation is a real problem, to which the article devotes balanced attention. Although an increasingly automated working environment may create new jobs in the long run, people can't be retrained for those hypothetical positions overnight.

Some robots carry their "brains" within their bodies, as organic creatures do, while others take remote direction from computers (Wi-Fi efficiency permitting—now there's an intriguing plot premise, a society dependent on robots controlled by a central hive-mind AI, which blackmailers or terrorists might threaten to disable). On the most lifelike end of the scale, an animated figure called Mindar, "a metal and silicone incarnation of Kannon," a deity in Japanese Buddhism, interacts with worshipers. Mindar contains no AI, but that feature may eventually be added. American company Abyss Creations makes life-size, realistic sex dolls able to converse with customers willing to pay extra for an AI similar to Alexa or Siri. Unfortunately for people envisioning truly autonomous robot lovers, from the neck down they're still just dolls.

We're cautioned against giving today's robots too much credit. They can't match us in some respects, such as the manipulative dexterity of human hands, bipedal walking, or plain "common sense." We need to approach them with "realistic expectations" rather than thinking they "are far more capable than they really are." Still, it seems wondrous to me that already robots can pick crops, milk cows, clean and disinfect rooms (I want one of those), excavate, load cargo, make deliveries in office buildings (even asking human colleagues to operate elevators for them), take inventory, guide patients through exercise routines, arrange flowers, and "help autistic children socialize." Considering that today's handheld phones are more intelligent than our first computer was (1982), imagine what lies ahead in the near future!

Margaret L. Carter

Carter's Crypt

Thursday, August 27, 2020

Robot Caretakers

Here's another article, long and detailed, about robot personal attendants for elderly people:

Meet Your Robot Caretaker

I was a little surprised that the first paragraph suggests those machines will be a common household convenience in "four or five decades." I'd have imagined their becoming a reality sooner, considering that robots able to perform some of the necessary tasks already exist. The article mentions several other countries besides Japan where such devices are now commercially available.

The article enumerates some of the potential advantages of robot health care aides: (1) There's no risk of personality conflicts, as may develop between even the most well-intentioned people. (2) Automatons don't need time off. (3) They don't get tired, confused, sick, or sloppy. (4) They can take the place of human workers in low-paid, often physically grueling jobs. (5) Automatons are far less likely to make mistakes, being "programmed to be consistent and reliable." (6) In case of error, they can correct the problem with no emotional upheaval to cloud their judgment or undermine the client-caretaker relationship. (7) The latter point relates to an actual advantage many prospective clients see in having nonhuman health aides; there's no worry about hurting a robot's feelings. (8) Likewise, having a machine instead of a live person to perform intimate physical care, such as bathing, would avoid embarrassment.

Contrary to hypothetical objections that health-care robots would deprive human aides of work, one expert suggests that "robots handling these tasks would free humans to do other, more important work, the kind only humans can do: 'How awesome would it be for the home healthcare nurse to play games, discuss TV shows, take them outside for fresh air, take them to get their hair done, instead of mundane tasks?'” Isolated old people need "human connection" that, so far, robots can't provide. The article does, however, go on to discuss future possibilities of emotional bonding with robots and speculates about the optimal appearances of robotic home health workers. A robot designed to take blood pressure, administer medication, etc. should have a shape that inspires confidence. On the other hand, it shouldn't look so human as to fall into the uncanny valley.

As far as "bonding" is concerned, the article points out that "for most people, connections to artificial intelligence or even mechanical objects can happen without even trying." The prospect of more lifelike robots and deeper bonding, however, raises another question: Would clients come to think of the automaton as so person-like that some of the robotic advantages listed above might be negated? I'm reminded of Ray Bradbury's classic story about a robot grandmother who wins the love of a family of motherless children, "I Sing the Body Electric"; one child fears losing the "grandmother" in death, like her biological mother.

Margaret L. Carter

Carter's Crypt

Thursday, July 23, 2020

Digisexuals

In 2018, Akihiko Kondo, a Japanese school administrator, married a hologram of a "cyber celebrity," Hatsune Miku, an animated character with no physical existence. She dwells in a Gatebox, "which looks like a cross between a coffee maker and a bell jar, with a flickering, holographic Miku floating inside." She can carry on simple conversations and do tasks such as switching lights on and off (like Alexa, I suppose). Although the marriage has no legal status, Kondo declares himself happy with his choice:

Rise of Digisexuals

According to a different article, Miku originated as "computer-generated singing software with the persona of a big-eyed, 16-year-old pop star with long, aqua-colored hair." Gatebox's offer of marriage registration forms for weddings between human customers and virtual characters has been taken up by at least 3,700 people in Japan (as of 2018). People who choose romance with virtual persons are known as "digisexuals." The CNN article linked above notes, "Digital interactions are increasingly replacing face-to-face human connections worldwide."

Of course, "digital interactions" online with real people on the other end are different from making emotional connections with computer personas. The article mentions several related phenomena, such as the robotic personal assistants for the elderly becoming popular in Japan. Also, people relate to devices such as Siri and Alexa as if they were human and treat robot vacuums like pets. I'm reminded of a cartoon I once saw in which a driver of a car listens to the vehicle's GPS arguing with his cell phone's GPS about which route to take. Many years ago, I read a funny story about a military supercomputer that transfers "her" consciousness into a rocket ship in order to elope with her Soviet counterpart. The CNN article compares those anthropomorphizing treatments of electronic devices to the myth of Pygmalion, the sculptor who constructed his perfect woman out of marble and married her after the goddess Aphrodite brought her to life. As Kondo is quoted as saying about holographic Miku's affectionate dialogue, "I knew she was programmed to say that, but I was still really happy." Still, the fact that he "completely controls the romantic narrative" makes the relationship radically different from human-to-human love.

Falling in love with a virtual persona presents a fundamental dilemma. As long as the object of affection remains simply a program designed to produce a menu of responses, however sophisticated, the relationship remains a pleasant illusion. If, however, the AI becomes conscious, developing selfhood and emotions, it can't be counted on to react entirely as a fantasy lover would. An attempt to force a self-aware artificial person to keep behaving exactly the way the human lover wishes would verge on erotic slavery. You can have either an ideal, wish-fulfilling romantic partner or a sentient, voluntarily responsive one, not both in the same person.

Margaret L. Carter

Carter's Crypt

Thursday, July 16, 2020

AI and Human Workers

Cory Doctorow's latest LOCUS essay explains why he's an "AI skeptic":

Full Employment

He believes it highly unlikely that anytime in the near future we'll create "general AI," as opposed to present-day specialized "machine learning" programs. What, no all-purpose companion robots? No friendly, sentient supercomputers such as Mike in Heinlein's THE MOON IS A HARSH MISTRESS and Minerva in his TIME ENOUGH FOR LOVE? Not even the brain of the starship Enterprise?

Doctorow also professes himself an "automation-employment-crisis skeptic." Even if we achieved a breakthrough in AI and robotics tomorrow, he declares, human labor would be needed for centuries to come. Each job rendered obsolete by automation would be replaced by multiple new jobs. He cites the demands of climate change as a major driver of employment creation. He doesn't, however, address the problem of retraining those millions of workers whose jobs become superseded by technological and industrial change.

The essay broadens its scope to wider economic issues, such as the nature of real wealth and the long-term unemployment crisis likely to result from the pandemic. Doctorow advances the provocative thesis, "Governments will get to choose between unemployment or government job creation." He concludes with a striking image:

"Keynes once proposed that we could jump-start an economy by paying half the unemployed people to dig holes and the other half to fill them in. No one’s really tried that experiment, but we did just spend 150 years subsidizing our ancestors to dig hydrocarbons out of the ground. Now we’ll spend 200-300 years subsidizing our descendants to put them back in there."

Speaking of skepticism, I have doubts about the premise that begins the article:

"I don’t see any path from continuous improvements to the (admittedly impressive) 'machine learning' field that leads to a general AI any more than I can see a path from continuous improvements in horse-breeding that leads to an internal combustion engine."

That analogy doesn't seem quite valid to me. An organic process (horse-breeding), of course, doesn't evolve naturally into a technological breakthrough. Development from one kind of inorganic intelligence to a higher level of similar, although more complex, intelligence is a different kind of process. Not that I know enough of the relevant science to argue for the possibilities of general AI. But considering present-day abilities of our car's GPS and the Roomba's tiny brain, both of them smarter than our first desktop computer only about thirty years ago, who knows what wonders might unfold in the next fifty to a hundred years?

Margaret L. Carter

Carter's Crypt

Sunday, February 09, 2020

Nuts

If you wish to read about testicles, this is not the venue. At least, not this day. Nor do I intend to discuss a staple of the vegan diet.

This is about copyright-related news that does not make sense.

Yesterday, on a very prestigious forum for authors, in a thread about ebook piracy, one correspondent opined, "It's just downloading..."

In fact, it is the downloading that creates multiple, perfect, illegal copies.

Meanwhile, on one of the most-watched financial channels, a panel was discussing artificial intelligence and the scraping of privately and commercially taken photographs from social media sites for commercial exploitation and facial recognition technology.

The one aspect that the anchor and panelists never mentioned at all was the massive copyright infringement.
Anyone who takes a photograph owns the copyright to that photograph. If you post a selfie, you do not automatically grant Clearview AI or anyone else a license to sell your face to the fuzz.

Sputnik news has the scoop:
https://sputniknews.com/science/202002061078248616-facebook-demands-facial-recognition-startup-stop-scraping-images-from-platform-/

Even that very informative article glosses over a very important term: "publicly available".
https://www.lawinsider.com/dictionary/publicly-available

There is a difference between something being available to view, and available to copy and re-publish and distribute.

Another nutty misunderstanding that is prevalent among pirates is of "public domain".
https://legal-dictionary.thefreedictionary.com/Public-domain

Just because someone uploaded an illegal copy of a novel to a website does not mean that that novel is lawfully in the public domain.  Not if the author is still alive, or deceased within the last 70 years.

Likewise, those who are curious about their ancestors and long lost relatives do not necessarily intend to donate to a government DNA database. If Heritage/Ancestry/23andMe keeps pestering you to give permission for your DNA to be used for "research", do not agree. They've probably already sold your DNA in a job lot and are trying to clean up their bases.

If you gave a spit, you'd better keep a diary, and have an alibi for every hour of every day and night!

Allegedly, Amazon is getting in on the use of faked or fake people to avoid having to pay royalties to real people. If you are famous -- or merely attractive and popular -- and they have multiple views of your face and tracks of your voice, there's no limit to the liberties "they" can take.

Chris Castle writes:
https://musictechpolicy.com/2020/02/07/the-singularity-is-nigh-amazon-fake-brand-personality-follows-chinas-fake-news-presenter-with-us-right-of-publicity-infringement/

Also Amazon-related: one rare victory this past week against the inexorable incursions of Amazon and AI on authors' rights was that of the Association of American Publishers against Audible Captions.
https://publishingperspectives.com/2020/02/copyright-coup-as-association-american-publishers-succeeds-in-audible-captions-case/

Registering the copyright on anything, including one's photographs, is not as expensive as one might imagine. wikiHow explains the steps:
https://www.wikihow.com/Copyright-Photographs

Copyright.gov has the fee schedule in effect since 2014 (and one can register a batch of photographs for a single fee):
https://www.copyright.gov/about/fees.html

Act quickly. Copyright registration costs are likely to rise by more than 20% this coming spring of 2020 -- except for batches of photographs, for which no increase is proposed:
https://www.copyright.gov/rulemaking/feestudy2018/proposed-fee-schedule.pdf

Finally, the Copyright Alliance (copyrightalliance.org) is asking (again) for action to encourage Oregon Senator #JustOne Ron Wyden to stop his opposition to anything that might improve copyright protections for authors, musicians, and other creators:

https://copyrightalliance.org/ca_post/why-is-senator-wyden-the-only-obstacle-standing-between-americas-creators-and-justice/?_zs=TqSBb&_zl=bOTw1

One of his felon-friendly* rationales for blocking the #CASEAct is that mere downloaders ought not to face any disincentive for "stealing" or "sharing" copyrighted content that the creators rely on to pay their bills #MySkillsPayBills.

Apparently, @RonWyden would also like to change Fair Use from a defense for defendants to a negative proposition --i.e. that the infringement was not fair use-- to be proven by the plaintiffs.

That's just nuts!

All the best,

Rowena Cherry 
SPACE SNARK™ http://www.spacesnark.com/ 

*PS....copyright infringement is not a felony.