Showing posts with label Skeptical Inquirer.

Thursday, June 20, 2024

Do Spoilers Really Spoil?

The latest issue of SKEPTICAL INQUIRER includes an article exploring whether advance exposure to spoilers actually makes the experience of reading a book or viewing a movie (the author mainly discusses films) worse, neutral, or better:

Savoring Uncertainty

The author, Stuart Vyse, starts by analyzing the difference between stories that provide a "clear resolution" and those that end with ambiguities unresolved. He notes, "Given the chaos of everyday life, it’s understandable that people are drawn to stories that make sense and provide closure." He links this tendency to a wish to believe we live in a just universe, offering the TV series LAW AND ORDER as a typical example. There I think he's absolutely right. The traditional detective novel is the most moral of genres. It promises that problems will be solved, questions answered, justice served, and criminals punished. In rare cases when the criminal escapes the grasp of the law, it's because the detective has determined his or her crime was justified. Vyse contrasts the traditional formula with the "noir" subgenre, in which ambiguity reigns, morality comes in shades of gray, and justice is far from guaranteed.

He then discusses the connection, if any, between enjoyment of ambiguity and tolerance of spoilers. He also goes into the definition of a spoiler, which can vary according to the individual experiencing it -- e.g., someone who's naive about the particular genre, such as a small child -- and to what extent the information constitutes "common knowledge." We'd all probably agree that the statute of limitations on spoilers has run out for mentioning that Romeo and Juliet die at the end of the play, for example. For a century or more, certainly since the first movie adaptations came out, everybody has known Dr. Jekyll and Mr. Hyde inhabit the same body. The phrase "Jekyll and Hyde" has become proverbial. When the novella was first published, however, that secret came as a shocking revelation near the end. Upon the original publication of DRACULA, readers who ignored reviews could have picked up the novel without suspecting the Count's true nature. Nowadays, even elementary-school kids know "Dracula" equals "vampire."

Vyse cites research on whether spoilers decrease appreciation for a work, increase it, or have no effect. Results of various studies yield different answers. I've noticed tolerance for spoilers ranges from the zero-tolerance of fans such as one of our children, who avoids even book cover blurbs if possible, to my own attitude, sympathetic to a comment I read somewhere that a story capable of being "spoiled" by knowledge of what happens isn't worth spoiling. I admit exceptions, of course, such as knowing the killer before the big reveal in a murder mystery (on first reading, at least) or works in which the climactic twist is the whole point of the thing, such as THE SIXTH SENSE. I don't at all mind knowing in advance whether a major character will live or die; in fact, I sometimes sneak a peek at the end to relieve the stress of wondering. When the series finale of FOREVER KNIGHT aired, I was glad I'd read a summary before viewing the episode. When I actually saw the devastating final scene, having braced myself for the worst allowed me to feel it wasn't quite so bad as other fans had maintained. The fact that I've reread many of my favorite books over and over demonstrates that foreknowledge of the plot doesn't bother me. With that knowledge, I can relax into the pleasure of revisiting familiar characters.

In one of C. S. Lewis's works of literary criticism, he declares that the point of a startling twist in a book or any artistic medium isn't the surprise in itself. It's "a certain surprisingness." During subsequent exposures to the work, we have the fun of anticipating the upcoming surprise and enjoying how the creator prepares us for it. In a second or later reading of a mystery, for example, we can notice the clues the author has hidden in plain sight. We realize how we should have guessed the murderer and admire the author's skill at concealing the solution while still playing fair with the reader. (Along that line, I was astonished to hear Nora Roberts remark at a convention that she doesn't plan her "In Death" novels, written under the name "J. D. Robb," in advance. How can anyone compose a detective story without detailed plotting? She must have to do an awful lot of cleanup in revision.)

Learning the general plot of a novel or film prior to reading or viewing doesn't "spoil" it for me. I read or watch for the experience of sharing the characters' problems, dangers, and joys, discovering how they navigate the challenges of the story, and getting immersed in their emotional and interpersonal growth. Once the "narrative lust" (another phrase from Lewis, referring to the drive to rush through the narrative to find out what happens next) has been satisfied by the first reading or viewing, in future ones we can take a while to savor all the satisfying details we didn't fully appreciate the first time.

Margaret L. Carter

Please explore love among the monsters at Carter's Crypt

Thursday, April 25, 2024

Pros and Cons of AI for Authors

Is AI good or bad for authors? AI (artificial intelligence) is such a broad term, and the technology included under its umbrella -- from little more than an enhanced variety of autocomplete to programs that almost appear to "think" -- is so diverse, that this question seems impossible to answer with a simple positive or negative. In this WRITER'S DIGEST article, Mike Trigg covers the most problematic and often discussed downsides, such as unauthorized use of written works for training generative AI, appropriation of copyrighted content without permission or payment, and the perceived market threat of AI-produced books. What he believes we should worry about most, however, is "discovery bias":

The Worst Is Yet to Come

How do potential audiences find creators' works? Through one form or another of advertising, changing as communication technologies advance. "AI will fundamentally change how we discover content," Trigg warns. Herein, he maintains, lies the greatest threat to authors. "In a future of AI-curated content, whose content do you think will be discoverable? Short answer: Whoever pays for that privilege." In this near-future scenario, "Rather than placing ads adjacent to Google search results or embedded in an Instagram feed, AI can just tell the user what to read, what to buy, what to do, without the pesky inconvenience of autonomous thought." Resulting feedback loops will lead to product recommendations, in books as in other commodities, that guide readers to content more and more similar to what they've purchased in the past. Niche markets will become progressively niche-er. "Discovery Bias will further concentrate the publishing industry into fewer and fewer bestselling authors -- the ones with the name recognition, publicity teams, and promotional budgets to generate a self-perpetuating consumption loop."

I'm not totally convinced the benefits will be restricted to bestselling authors. Mightn't lesser-known authors "similar" to the bestsellers in their subgenre also get a boost from the discovery process? But I can't deny the plausibility of Trigg's warning.
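To make that feedback loop concrete, here's a toy sketch of my own (not from Trigg's article; the catalog, the numbers, and the one-dimensional "taste axis" are all invented) showing how a recommender that keys on past purchases narrows its suggestions with every round:

```python
import random, statistics

random.seed(0)

# Invented toy example: 100 "books" spread along a one-dimensional taste axis.
catalog = [i / 99 for i in range(100)]

def recommend(history, k=5):
    """Suggest the k catalog items closest to the reader's average taste so far."""
    center = statistics.mean(history)
    return sorted(catalog, key=lambda book: abs(book - center))[:k]

# Five books found by browsing freely, before any AI curation.
history = [random.choice(catalog) for _ in range(5)]
print("spread while browsing freely:", round(max(history) - min(history), 3))

# Ten rounds of buying only what the recommender surfaces.
curated = []
for _ in range(10):
    pick = random.choice(recommend(history))
    history.append(pick)   # each purchase further anchors the taste profile
    curated.append(pick)

print("spread of AI-curated purchases:", round(max(curated) - min(curated), 3))
# The curated spread collapses to a sliver of the catalog: past purchases
# steer every future suggestion toward the same narrow niche.
```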

His final paragraph offers hope, though. The unique gift of human authors, "crafting stories that are original, emotional, and compelling . . . is still something that no technology can replicate."

Note the potential implications of "still," however.

For more on the pros and cons of cutting-edge artificial intelligence, you might want to get the AI-themed May/June 2024 issue of SKEPTICAL INQUIRER.

Margaret L. Carter

Please explore love among the monsters at Carter's Crypt.

Thursday, March 03, 2022

Ranking Dangers

The March-April issue of SKEPTICAL INQUIRER contains an article titled "The World's Most Deadly Animal," by Harriet Hall. The various candidates for this honor are ranked by the number of human beings they kill annually. A character in Robert Heinlein's TUNNEL IN THE SKY declares humans to be the deadliest animals. The villain in Richard Connell's classic short story "The Most Dangerous Game," who hunts his captives like wild beasts, would agree. Hall's list of the top ten most dangerous animals comes from this source:

Science Focus

Sharks and black widow spiders, which many people might think of when "deadly creatures" come to mind, don't even make it into the top ten. Lions do, barely, at the bottom. Hippos, elephants, and crocodiles beat them. The human animal (counting only homicides) rates second rather than first. The deadliest animal on Earth as quantified by people killed every year? The mosquito.

As Hall points out at the beginning of her article, our tendency to overlook mosquitoes and another high-ranking insect, the assassin bug, highlights the "availability heuristic." Facts and incidents that stick in our minds because of their sensational content tend to be perceived as more common than they actually are. There's a widespread attitude of, "Why is it getting so hot, and how did we get into this handbasket?" when in fact teenage pregnancy, adolescent illicit drug use, violent crime, drunk driving fatalities, and the worldwide number of deaths in battle have all decreased over the past few decades. We sometimes forget that frightening incidents and trends make headlines BECAUSE they're unusual, not commonplace. The occasional shark attack draws much more attention than thousands of malaria-causing mosquito bites.

Steven Pinker, in the section on phobias in his book HOW THE MIND WORKS, postulates that adult phobias are instinctive childhood fears that haven't been outgrown. These universal fears, which fall into certain well-defined categories, reflect the threats most hazardous to our "evolutionary ancestors"—snakes, spiders, heights, storms, large carnivores, darkness, blood, strangers, confinement, social scrutiny, and leaving home alone. In modern cities, the brain's fear circuitry often fails to function for optimal protection. "We ought to be afraid of guns, driving fast, driving without a seatbelt. . . not of snakes and spiders." Children have no innate aversion to playing with matches or chasing balls into traffic; instead, a survey of Chicago school kids revealed their greatest fears to be of lions, tigers, and snakes. Many writers of horror fiction draw upon intuitive awareness of our hard-wired terrors. The cosmic entity in Stephen King's IT targets children because, while adults obsess over mundane hazards such as heart attacks and financial ruin, children's fears run deeper and purer.

Margaret L. Carter

Carter's Crypt

Thursday, January 27, 2022

Creative Fakelore for Fun and Enlightenment

The January-February 2022 issue of SKEPTICAL INQUIRER includes an article by statistical ecologist Charles G. M. Paxton, narrating his experiment of creating an imaginary water monster to masquerade as an authentic legend. He was inspired by an account of an eighteenth-century ghost in London that turned out to be a hoax promulgated in the 1970s. Paxton wondered whether his lake monster could gain similar credence. One intriguing thing about this experiment, to me, is that not only did his invented sightings get retold as genuine by multiple sources, but new reports of alleged historical sightings also sprang up, independent of any effort on his part.

He decided to create, not a generic sea serpent like Nessie in Loch Ness, but a "monstrous aquatic humanoid." He located it in two freshwater lakes in England's Lake District that, as far as he knew, had no existing tradition of monster lore. Paxton named this creature Eachy and devised a false etymology for the word. He also invented a nonexistent book to cite as a source. After he had an article about Eachy uploaded to Wikipedia, references to the monster began to spread. Although the Wikipedia article on Eachy no longer exists, the Cryptid Wiki has a straightforward page on him or it as a real piece of folklore:

Eachy

The Cryptid Wiki piece mentions the earliest reported appearance of Eachy having occurred in 1873, an imaginary "fact" taken directly from Paxton's material. Moreover, in 2007 the monster sneaked into an actual nonfiction book, a cryptozoology guide by Ronan Coghlan. By January of 2008, Eachy T-shirts were being sold on the internet by someone unconnected to Paxton. When the Wikipedia Eachy page was deleted in 2019, it ranked as the second-longest-surviving hoax on the site.

What do we learn from this story? Paxton proposes that "the tale of the Eachy tells us the dangers of how Wikipedia can be subject to manipulation." As he mentions, however, in more recent years Wikipedia has tightened its standards and introduced more safeguards. On a broader scale, the Eachy hoax demonstrates how easily recorded history can be distorted or even fabricated from nothing, then accepted as fact. An important caution, one Paxton also alludes to, is the hazard of uncritically believing what appear to be multiple sources when in truth they're bouncing the same "facts" around in a self-referential echo chamber, repeating what they've picked up from previous sources in endless circularity. That phenomenon can be seen in a field I'm somewhat familiar with, scholarship on Bram Stoker's DRACULA. For instance, after an early biography suggested that Stoker might have died from complications of syphilis, numerous authors since then (in both nonfiction and novels) have accepted the assumption without question: "Bram Stoker had syphilis, which influenced the writing of DRACULA." The tale of Eachy also reinforces the obvious warning not to believe everything you read on the internet or even in books.

It's fascinating to me that a legend can be invented, disseminated, and perceived as authentic so quickly. Some authorities believe the story of Sawney Bean, the alleged patriarch of a sixteenth-century Scottish cannibal family, first reported in the NEWGATE CALENDAR centuries after the supposed events and repeated as fact in numerous publications since, was just such a fictional legend. And Sawney Bean's tale became deeply rooted in the public imagination long before the internet. In our contemporary electronic age, the chilling scenario in Orwell's NINETEEN EIGHTY-FOUR comes to mind. If history is whatever is written, what happens when history becomes so easy to rewrite? That's one good reason why, even if it ever became possible to digitize and make available on the web every book in existence, we should still hang onto the physical books. Ink on paper can't be altered at whim like bytes in an electronic file.

Margaret L. Carter

Carter's Crypt

Thursday, November 05, 2020

The Tyranny of Now

The November/December issue of SKEPTICAL INQUIRER contains an article by psychologist Stuart Vyse titled "COVID-19 and the Tyranny of Now." The phrase refers to our tendency to choose immediate rewards over potential future benefits. Our instincts drive us in that direction, since we evolved in environments where basing choices on short-term results made sense. There was little point in worrying about one's health in old age when one might get eaten by a saber-toothed tiger long before reaching that stage of life. Vyse's article summarizes this tendency as, "Smaller rewards in the present are chosen over larger ones in the future." Understandably, our first impulse is to go for the immediate, visible reward instead of the hypothetical future one that may or may not become reality. That's why people living in high-risk situations tend to heavily discount the future; if a young man in a dangerous neighborhood frequently sees friends and neighbors getting shot, the wisdom of long-term planning may not seem obvious to him. In the context of his physical and social environment, that choice makes sense.

Vyse reflects on climate change and the COVID-19 pandemic as two current high-profile examples. We have immediate experience of the inconveniences and hardships of changing our lifestyles to minimize the effects of those two phenomena. The potential rewards of self-denial, on the other hand—a return to being able to lead "normal" lives without catching the disease, a cleaner and more stable environment—exist in a future we have to take on faith. In connection with the pandemic, the fact that any effect of precautions or lack thereof shows up weeks (at least) after we change our actions makes it harder for us to judge the value of restricting our behavior. Another factor is that a drop in cases as a result of lockdowns can lead to the tempting but irrational response, "What we've been doing has worked, so now we can stop doing it" (my summary of Vyse's analysis). In short, delays are difficult. We have to make a deliberate, analytical effort to resist immediate impulses and embrace long-term gain. As Vyse quotes from an anonymous source, "If the hangover came first, nobody would drink."
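For readers who like the numbers spelled out, here's a minimal sketch of the standard hyperbolic-discounting model behavioral economists use for exactly this trade-off; the discount rate and dollar amounts are my own invented illustration, not figures from Vyse's article:

```python
# A minimal sketch of temporal discounting using the standard hyperbolic model
# V = A / (1 + k * D). The discount rate k and the dollar amounts are invented
# for illustration; they aren't taken from Vyse's article.

def discounted_value(amount, delay_days, k=0.02):
    """Present subjective value of a reward received after delay_days."""
    return amount / (1 + k * delay_days)

now = discounted_value(50, 0)        # $50 today keeps its full subjective value
later = discounted_value(100, 180)   # $100 in six months shrinks in the mind
print(f"$50 now feels like ${now:.2f}; $100 in 180 days feels like ${later:.2f}")
# With this k, the delayed $100 is subjectively worth about $21.74 today,
# so the smaller-but-immediate reward wins -- the "tyranny of now" in one line.
```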

Here's an article explaining this phenomenon in terms of a struggle between the logical and emotional parts of the brain:

Why Your Brain Prioritizes Instant Gratification

"The researchers concluded that impulsive choices happen when the emotional part of our brains triumphs over the logical one." The dopamine surge can be hard for the rational brain to resist. The article explores some methods for training oneself to forgo immediate pleasures in favor of later, larger gains, such as managing one's environment to avoid temptation.

This Wikipedia article goes into great detail about the neurological, cognitive, and psychological aspects of delayed gratification:

Delayed Gratification

It devotes a section to the famous Stanford marshmallow experiments of the 1960s and 70s, in which preschoolers were promised two marshmallows if they could resist eating a single marshmallow for a certain time span. Children who succeeded devised strategies to distract themselves or to imagine the tempting treat as something less appetizing. Interestingly, this article reports that, according to some studies, 10% more women than men have the capacity to delay gratification. It also mentions that the ability to exercise that kind of self-control may weaken in old age. "Declines in self-regulation and impulse control in old age predict corresponding declines in reward-delaying strategies...."

It's easy to think of a different reason why some elderly people may abandon the "rational" course of postponing rewards. The choice not to delay gratification may result from a perfectly sensible cost-benefit calculation, rather than surrender to the "emotional brain." In the absence of a diagnosed medical condition that poses an immediate, specific danger, if you're over 90 do you really care whether too much ice cream might make you gain weight or too much steak increase your cholesterol?

Margaret L. Carter

Carter's Crypt

Thursday, September 24, 2020

Self-Aware Cells?

The September-October issue of SKEPTICAL INQUIRER reviewed a book called THE FIRST MINDS: CATERPILLARS, 'KARYOTES, AND CONSCIOUSNESS, by Arthur S. Reber. Although I haven't read the book, only the long, detailed review essay, it sounds intriguing. Reber addresses the "problem of consciousness"—how did it originate from non-sentient matter? how is this seemingly immaterial phenomenon related to the material body?—from the simplest organisms up. He proposes that even single-celled organisms have agency, subjectivity, and sentience. He maintains that from the beginning of evolution, even the most "primitive" life-form must have been "sensitive to its immediate surroundings and to its own internal states." The review paraphrases his view as asserting that "to understand consciousness we must look first at the single-celled organism rather than. . . the human brain."

But sentience is customarily distinguished from "perception and thought." Does Reber claim that a one-celled life-form is self-aware, our usual meaning of "conscious"? The reviewer asks, "Is Reber really asserting that a unicellular organism has a mind?" Apparently so. Reber also seems to assume that amoebae can feel pain. What about plants? Reber remains agnostic on this question, pointing out that sentience wouldn't bestow any clear evolutionary advantage on a creature that can't move. As time-lapse nature photography demonstrates, though, many plants do move, just too slowly for us to notice in real time. (Some of the "weed" bushes in our yard, I think, do grow almost fast enough to be observed by the naked eye.)

Going even further, he suggests that the individual cells in our bodies are not simply alive but sentient. The reviewer asks, "Do we harbor an entire universe of minds?" Reber would answer in the affirmative, again apparently defining "mind" and "consciousness" very broadly. This concept reminds me of Madeleine L'Engle's A WIND IN THE DOOR, in which the characters become infinitesimally small to enter the body of Meg's critically ill little brother, Charles Wallace. They meet a community of farandolae, sub-microscopic creatures dwelling inside the mitochondria within one of Charles Wallace's cells. To a farandola, cells are worlds, and Charles Wallace's body is a galaxy. Much more recently, an ongoing manga series currently in print, CELLS AT WORK, portrays the internal organs and processes of the human body from the viewpoints of blood cells and other cells, each an individual character. Presently, the protagonist red and white cells have been involuntarily moved, by transfusion, from their original body to a new one. They have to cope with new (and worse) working conditions and learn to cooperate with the body's veteran cells. This is a fun, fascinating series, conveying biological facts in an informative and entertaining way, as accurately as possible considering the premise of humanoid, intelligent cells, who seem to survive a lot longer than white and red blood cells actually live.

The reviewer in SKEPTICAL INQUIRER discusses the obvious problem with Reber's hypothesis: a blood cell or an amoeba is clearly not a human brain, and the emergence of the more complex structures and functions can't be equated with or explained by their simpler predecessors. According to the article, Reber doesn't manage to solve the problem of mind, which has baffled philosophers and scientists for millennia, but the reviewer still recommends his book as "a worthwhile contribution to the literature on consciousness."

The notion of conscious or even sentient cells is intriguing to contemplate but, if accepted with full seriousness, would paralyze our ability to carry on with daily life. Could we kill disease germs or even surgically excise cancers? If a tumor were self-aware, would it consider its right to life preempted by ours? The mind boggles.

Margaret L. Carter

Carter's Crypt

Thursday, November 01, 2018

Reflections on Alien Visitors

The November-December issue of SKEPTICAL INQUIRER contains three articles about UFOs and extraterrestrials.

"UFO Identification Process," by Joe Nickell and James McGaha, offers an overview of the many different phenomena that can be mistaken for alien spaceships. The authors provide a list of common "UFOs" with their most likely explanations, broken down into multiple categories with several items under each. For instance, they cite five different classifications, with examples, under "Daylight Objects/Lights" and five under "Nocturnal Lights/Objects." It's interesting to discover how many common objects and events can fool the untrained observer and even some trained observers such as pilots. This kind of material could enhance the realism of a story about a UFO sighting. If a character rules out all the typical sources of mistaken identification, his or her conclusion that an actual spaceship has appeared will seem more credible.

Eric Wojciechowski, in "UFOs: Humanoid Aliens? Why So Varied?", advances the position that the widely varied descriptions of alleged alien visitors, diverse in appearance yet strangely all anthropomorphic, make a "psychological explanation" for the reported contacts more likely than "an alien intelligence interacting with human beings." Where the previous article evaluates sightings of apparent flying objects, this one deals with "close encounters" reported by people who claim to have actually seen extraterrestrials. The author maintains that the odds are overwhelmingly against the probability that diverse intelligent species have visited Earth, that almost all of them happen to be humanoid, and that they've managed to remain hidden from mainstream attention yet have revealed themselves to random individuals. He places heavy emphasis on the "anthropomorphic yet varied" factor. Although I don't believe the alleged alien encounters actually happened (not that I've made a formal study of the topic, but those I've read about look like attempts at writing science fiction by people who know very little about SF), I don't find this author's arguments totally convincing. Diversity rather than uniformity could just as well be offered as an argument FOR the truth of the reports, suggesting that they're not merely imitations of other witnesses' accounts. Also, I can easily think of explanations for the phenomena he considers unlikely. An interstellar organization composed of multiple species from various planets might be observing us, for instance, and the reason we meet only humanoids is that humanoid species are assigned to observe worlds inhabited by races similar to themselves. The reason they're often glimpsed, yet no solid proof of their presence has turned up, might be that they want to observe us without interfering but don't mind being noticed, like Jane Goodall with the chimpanzees.

Biologist David Zeigler's ingenious article, "Those Supposed Aliens Might Be Worms," speculates on what life-forms might turn out to be most common on other planets and answers (you guessed it) "worms." He considers intelligent humanoids highly unlikely and the popular expectation of such to be a case of a "limited line of imagination." Whereas the humanoid body shape has evolved only once on our planet (all the examples we know of being closely related), wormlike creatures have developed independently multiple times and inhabit almost every available ecosystem. He lists eight different categories of worms, and this catalog isn't exhaustive.

If we found worms of some type on another planet, what are the chances of their being intelligent? It's hard to imagine them with any kind of material technology in the absence of hands, tentacles, or other manipulative organs. But are such organs essential to the evolution of intelligence as we know it? It's widely believed that dolphins have near-human intelligence, and they don't possess manipulative appendages.

Tangentially, speaking of imagination, a two-page essay in this issue titled "Why We're Susceptible to Fake News—and How to Defend Against It," by one of the magazine's editors, conflates confirmation bias and the tendency to rationalize away evidence that might disprove one's entrenched beliefs with the mind-set of childhood make-believe scenarios. According to two psychologists quoted in the essay, Mark Whitmore and Eve Whitmore (there's no mention of whether they're related to each other), childhood beliefs absorbed from one's parents are reinforced "as rationalization piles on top of rationalization over the years." This unfortunate outcome is allegedly made worse by the supposed fact that "Children's learning about make-believe and mastery of it becomes the basis for more complex forms of self-deception and illusion into adulthood." Parents unwittingly teach children "that sometimes it's okay to make believe things are true, even though they know they are not."

It's hard to read this egregious misconception about the nature and value of imagination without screaming in outrage. From a fairly early age, children know the difference between fantasy "pretend play" and lies. Furthermore, fans of fantasy and other kinds of speculative fiction are less vulnerable to "self-deception" in relation to their preferred reading material than fans of "realistic" fiction. Readers of novels about extravagant success or exotic romance may indulge in (usually harmless) daydreams about the prospects of such events happening in their own lives. Fans of stories about supernatural beings, alternate worlds, distant planets, or the remote future aren't likely to expect to encounter such things firsthand. In AN EXPERIMENT IN CRITICISM, C. S. Lewis labels this kind of reading "disinterested castle-building," as distinct from the normal "egoistic castle-building" of imagining one's real-life self in the position of the hero or heroine of a "realistic" novel, and the pathological version of the latter, where the subject obsessively fantasizes about becoming a millionaire or winning the ideal romantic partner without making the slightest real-life effort to achieve those goals. The authorities quoted in that SKEPTICAL INQUIRER article seem to compare all fantasy play to the third category.

One more item of interest: The Romance Reviews website is holding a month-long promotional event throughout November. I'll be giving away a PDF of my story collection DAME ONYX TREASURES (fantasy and paranormal romance):

The Romance Reviews

Margaret L. Carter

Carter's Crypt

Thursday, November 02, 2017

The Plausibility of Modern Legends

I subscribe to the magazine SKEPTICAL INQUIRER, which I highly recommend to fans and writers of SF and fantasy. Its coverage of myths, legends, and hoaxes offers lots of story seeds and can help authors ensure that their characters respond rationally to incredible events rather than acting overly gullible. The latest issue contains a review of a new book about the Loch Ness Monster. I would like to believe in the monster (alas, the only mark of its presence we saw on our one-hour Loch Ness cruise during a tour of Scotland was a steep hill where Nessie was supposed to have slid down into the lake). Everything I've read about it, though, seems to support the position that the reported sightings in modern times comprise a combination of mistaken perceptions and deliberate photographic hoaxes. That a breeding population of large animals could survive in a confined area with no physical evidence being found after decades of searching does seem unlikely. (If the monster weren't a natural animal but an intelligent, magical creature, as in Jean Lorrah's Nessie series, that would be a different matter.)

Bigfoot (which I'd also love to believe in) seems more plausible. If Sasquatches existed, they'd be a small breeding population of a near-extinct species of primate, a very few individuals living in a vast tract of millions of acres of forest in the Pacific Northwest. There's nothing inherently unlikely about their existence being real but unproven, since they would have a strong motivation to remain hidden.

On the other hand, while I certainly believe life exists elsewhere in the universe, I reluctantly disbelieve all UFO "evidence" I've read about. Sightings and photographs have been convincingly debunked. As for the personal narratives of face-to-face contact and abductions, they sound like attempts at writing science fiction by people who don't know much of anything about science fiction. They don't make sense in terms of motivation. If aliens advanced enough to travel here from other stars wanted to make contact with us, maybe to pass on their wisdom and save us from extinction, wouldn't they reveal themselves openly to people in a position to change the world? Would beings of superior intelligence and unimaginably powerful technology make contact with an alien planet by grabbing random inhabitants whose reports are certain to be disbelieved? And if the aliens wanted to observe us without being noticed, they'd surely have the ability to do so.

Now, maybe they're observing us and don't care about remaining unseen. Maybe they're gradually accustoming us to their presence, like Jane Goodall with the chimpanzees. In that case, though, the alleged abductions don't make sense; the events as reported couldn't be telling the aliens anything about us they don't already know.

Slightly more plausible motivations: Earth is under galactic quarantine; visits to our solar system are forbidden under the alien equivalent of the Prime Directive. The briefly, ambiguously glimpsed craft in the sighting reports aren't supposed to be here. They're interstellar smugglers or other shady characters taking refuge from pursuit in a forbidden zone. As for the abductions, if they actually happened, I can think of only one credible explanation—the aliens are just messing with our heads. Either the rogue visitors are playing random pranks in a spirit of cruel fun, or extraterrestrial scientists are conducting psychological experiments on us inferior beings to find out how our culture will interpret this irrational behavior on the part of superior entities.

Margaret L. Carter

Carter's Crypt

Thursday, March 02, 2017

Skeptical Thinking

Two articles in the March-April issue of SKEPTICAL INQUIRER deal with critical, scientifically oriented mental habits, which are directly relevant to thoughtful world-building. I subscribe to this magazine, which tackles pseudoscientific beliefs and theories of all types, mainly because exploration of topics such as UFOs, Bigfoot, poltergeists, and many other subjects in the fields of the paranormal and cryptozoology can yield story ideas (and also keep fictional characters who encounter such phenomena from seeming too gullible, if they're aware of the major arguments against, say, telepathy or channeling spirits). Some articles do take a blatantly anti-religious stance, but not enough to put me off the magazine as a whole. "Skepticism" doesn't mean "cynicism" or stubbornly doubting everything. As used in this publication, it means keeping an open mind, asking questions, and being ready to change one's beliefs as evidence demands.

The parent organization that publishes SKEPTICAL INQUIRER is here:

Center for Inquiry

"Why We Believe—Long After We Shouldn't," by Carol Tarvis and Elliot Aronson, analyzes the well-known phenomenon of cognitive dissonance. Once we've made up our minds on a topic, further information that contradicts or invalidates our belief or position makes us uncomfortable. The idea that we've made a mistake in holding a certain belief threatens to undermine our self-concept as intelligent, informed, morally upright people. We tend to pay more attention to and give more credence to data that support our position (confirmation bias). Social media exacerbate this problem. As everyone knows, Facebook (for instance) makes it easy to control our feed so that we end up in a bubble where we encounter only information that agrees with the beliefs we already embrace. Confronting evidence that we made a mistake in choosing the last car we bought (one of the authors' examples) and consoling ourselves by seeking out facts that reinforce our original high opinion of the vehicle is one thing. Letting confirmation bias rule us in matters such as politics or religion is more serious. This article uses the metaphor of a pyramid to illustrate how confirmation bias can drive people on opposite sides of an issue further apart. Imagine two people starting near the top of the pyramid, pretty close together. Often, at this point, "we are faced not with a clear go-or-no-go decision, but instead with ambiguous choices whose consequences are unknown or unknowable." Forced to make a decision, often an "impulsive" one, "we justify it to reduce the ambiguity of that choice." The more actions we take to justify our commitment to that initial choice, the nearer to the bottom of the pyramid we move, so that the two people who started close together at the top end up getting further and further apart. The authors acknowledge that "it's good to hold an informed opinion and not change it" every time a possible objection comes along. At the same time, though, it's "essential to be able to let go of that opinion when the weight of the evidence dictates." I'm reminded of C. S. Lewis's discussion of faith, which, he explains, doesn't mean blindly believing apparently impossible things. It means that once we've reached a certain belief (in his example, in God) for what we consider good reasons, we should stick to that belief unless we encounter solid evidence to disprove it, not let every adverse life event or shift in our emotions override our rational commitment.

"The Virtuous Skeptic," by Massimo Pigliucci, outlines the ethical principles a person intelligently seeking truth should embrace. Humility—knowing one's limitations and recognizing what kinds of expertise are needed to produce an informed opinion on any particular question—heads the list. The author lays out a table of "epistemic virtues"—curiosity, honesty, objectivity, parsimony (Occam's Razor), etc.—and the opposite "epistemic vices"—closed-mindedness, dogmatism, gullibility, self-deception, etc. The article ends with a list of questions we should ask ourselves, which apply well to any argument, scientific or not (slightly paraphrased and shortened): Did I carefully consider my opponent's arguments instead of dismissing them? Did I interpret my opponent's statements in the most charitable way possible (very important in politics!)? Did I entertain the possibility that I could be wrong? Am I an expert in this area, and, if not, have I consulted experts? Did I check the reliability of sources? Finally, "do I actually know what I'm talking about, or am I simply repeating somebody else's opinion?"

Critical thinking is hard work!

Margaret L. Carter

Carter's Crypt