
Thursday, September 14, 2023

AI Compositions and Their Influence on Letters as Signals

In Cory Doctorow's latest column, he brings up a potential unintended byproduct of overusing "large language models," aka chatbots such as ChatGPT:

Plausible Sentence Generators

He recalls a recent incident when he wrote a letter of complaint to an airline, threatening to sue them in small claims court, and fed it to such a program for rewriting. He was surprised at the high quality of the result. The site changed his pretty good "legal threat letter" into a noticeably stronger "vicious lawyer letter."

Letters of that type, as well as another of his examples, letters of recommendation from college professors, are performative. They transmit not only information but "signals," as Doctorow puts it. A stern letter from a lawyer sends the message that somebody cares enough about an issue to spend a considerable amount of money hiring a professional to write the letter. A recommendation from a professor signals that he or she considers the student worthy of the time required to write the recommendation.

One of Spider Robinson's Callahan's Bar stories mentions a similar performative function that shows up in an oral rather than a written format: spousal arguments. The winner of the argument is likely to be the one who dramatizes his or her emotional investment in the issue with more demonstrative passion than the other spouse.

In the case of written performances, Doctorow speculates on what will happen if AI-composed (or augmented) epistles become common. When it becomes generally known that it's easy and inexpensive or free to write a letter of complaint or threat, such messages won't signal the serious commitment they traditionally do. Therefore, they'll become devalued and probably won't have the intended impact. The messages (like form letters, though Doctorow doesn't specifically mention those) will lack "the signal that this letter was costly to produce, and therefore worthy of taking into consideration merely on that basis."

I'm reminded of the sample letters to congresscritters included in issues of the MILITARY OFFICER magazine whenever Congress is considering legislation that will have serious impact on members of the armed services and their families. These form letters are meant to be torn out of the magazine, signed, and mailed by subscribers to the presiding officers of the House and Senate. But, as obvious form letters, they clearly don't take much more effort than e-mails -- a little more, since envelopes must be addressed and stamps affixed, but not much. So how much effect on a legislator's decision can they have?

Miss Manners distinctly prefers old-fashioned, handwritten thank-you notes over e-mailed thanks because the former show that the recipient went to a certain amount of effort. I confess I do send thank-you notes by e-mail whenever possible. The acknowledgment reaches the giver immediately instead of at whatever later time I work up the energy to get around to it. So, mea culpa, I plead guilty! However, the senders of the gifts themselves have almost completely stopped writing snail-mail letters, so in communication with them, e-mail doesn't look lazy (I hope), just routine. Context is key.

Margaret L. Carter

Please explore love among the monsters at Carter's Crypt.

Thursday, April 13, 2023

How Will AI Transform Childhood?

According to columnist Tyler Cowen, "In the future, middle-class kids will learn from, play with and grow attached to their own personalized AI chatbots."

I read this essay in our local newspaper a couple of weeks ago. Unfortunately, I wasn't able to find the article on a site that didn't require registering for an account to read it. The essence of its claim is that "personalized AI chatbots" will someday, at a not too far distant time, become as ubiquitous as pets, with the advantage that they won't bite. Parents will be able to control access to content (until the kid learns to "break" the constraints or simply borrows a friend's less restricted device) and switch off the tablet-like handheld computers remotely. Children, Cowen predicts, will love these; they'll play the role of an ever-present imaginary friend that one can really interact with and get a response.

He envisions their being used for game play, virtual companionship, and private AI tutoring (e.g., learning foreign languages much more cheaply than through classes or individual tutors), among other applications. I'm sure our own kids would have loved a device like this, if it had been available in their childhood. I probably would have, too, back when dinosaurs roamed the Earth and similar inventions were the wild-eyed, futuristic dreams of science fiction. If "parents are okay with it" (as he concedes at one point), the customized AI companion could be a great boon—with appropriate boundaries and precautions. For instance, what about the risks of hacking?

One thing that worries me, however, isn't even mentioned in the article (if I remember correctly from the paper copy I neglected to keep): The casual reference to "middle-class kids." The "digital divide" has already become a thing. Imagine the hardships imposed on students from low-income families, who couldn't afford home computers, by the remote learning requirements of the peak pandemic year. What will happen when an unexamined assumption develops that every child will have a personal chatbot device, just as many people and organizations, especially businesses and government offices, now seem to assume everybody has a computer and/or a smart phone? (It exasperates me when websites want to confirm my existence by sending me texts; I don't own a smart phone, don't text, and don't plan to start.) Not everybody does, including some who could easily afford them, such as my aunt, who's in her nineties. Those assumptions create a disadvantaged underclass, which could only become more marginalized and excluded in the case of children who don't belong to the cohort of "middle-class kids" apparently regarded as the norm. Will school districts provide free chatbot tablets for pupils whose families fall below a specified income level? With a guarantee of free replacement if the thing gets broken, lost, or stolen?

In other AI news, a Maryland author has self-published a horror book for children, SHADOWMAN, with assistance from the Midjourney image-generating software to create the illustrations:

Shadowman

In an interview quoted in a front-page article of the April 12, 2023, Baltimore Sun, she explains that she used the program to produce art inspired by and in the style of Edward Gorey. As she puts it, "I created the illustrations, but I did not hand draw them." She's perfectly transparent about the way the images were created, and the pictures don't imitate any actual drawings by Gorey. The content of each illustration came from her. "One thing that's incredible about AI art," she says, "is that if you have a vision for what you're wanting to make it can go from your mind to being." And, as far as I know, imitating someone else's visual or verbal style isn't illegal or unethical; it's one way novice creators learn their craft. And yet . . . might this sort of thing, using software "trained" on the output of one particular creator, skate closer to plagiarism than some other uses of AI-generated prose and art?

Another AI story in recent news: Digidog, a robot police K-9 informally known as Spot, is being returned to active duty by the NYPD. The robot dog was introduced previously but shelved because some people considered it "creepy":

Robot Dog

Margaret L. Carter

Carter's Crypt