
Thursday, January 04, 2024

AI as a Bubble

Cory Doctorow's latest LOCUS column analyzes AI as a "tech bubble." What Kind of Bubble Is AI?

Although I had a vague idea of what economists mean by "bubble," I looked it up to make sure. I thought of the phenomenon as something that expands quickly and looks pretty but will burst sooner or later. The Wikipedia definition comes fairly close to that concept: "An economic bubble (also called a speculative bubble or a financial bubble) is a period when current asset prices greatly exceed their intrinsic valuation, being the valuation that the underlying long-term fundamentals justify." The term originated with the South Sea Bubble of the early eighteenth century, involving vastly inflated stocks. The Dutch "tulip mania" of the seventeenth century offers another prominent example.

Doctorow takes it for granted that AI fits into this category. He begins his essay with, "Of course AI is a bubble. It has all the hallmarks of a classic tech bubble." He focuses on the question of what KIND of bubble it is. He identifies two types: "The ones that leave something behind, and the ones that leave nothing behind." Naturally, the first type is desirable, the second bad. He analyzes the current state of the field with numerous examples, yet always with the apparent underlying assumption that the "bubble" will eventually "pop." Conclusion: "Our policymakers are putting a lot of energy into thinking about what they’ll do if the AI bubble doesn’t pop – wrangling about 'AI ethics' and 'AI safety.' But – as with all the previous tech bubbles – very few people are talking about what we’ll be able to salvage when the bubble is over."

This article delves into lots of material new to me, since I confess I don't know enough about the field to have given it much in-depth thought. I have one reservation about Doctorow's position, however -- he discusses "AI" as if it were a single monolithic entity, despite the variety of examples he refers to. Can all possible levels and applications of artificial intelligence be lumped together as components of one giant bubble, to endure or "pop" together? Maybe those multitudes of different applications are what he's getting at when he contemplates "what we'll be able to salvage"?

Margaret L. Carter

Please explore love among the monsters at Carter's Crypt.

Thursday, November 16, 2023

Bad People Versus Bad Institutions

In his latest LOCUS essay, Cory Doctorow discusses whether "all the internet services we enjoyed and came to rely upon became suddenly and irreversibly terrible – as the result of moral decay." Setting aside the question of whether "irreversibly terrible" is a bit exaggerated, he reasonably states that "it’s tempting to think that the people who gave us the old, good internet did so because they were good people," and the internet was ruined, if it was, by bad people:

Don't Be Evil

The problem isn't that simple, however, since institutions, not individuals, created the internet. On the other hand, institutions comprise many individuals, some with honorable motives and some driven solely by the quest for profit. In short, "institutional action is the result of its individuals resolving their conflicts." Can corporations as such be evil? Doctorow doesn't seem to be saying that's the case. Every institution, private or public, includes multitudes of people with conflicting goals, and both the individuals and their goals may be good or bad. Moreover, as he doesn't explicitly mention, some people's characters and motivations are neither all good nor all bad. Many drift along with the corporate culture from fear of the consequences of resistance or maybe just from failure to think through the full implications of what's going on. He does seem to be suggesting, however, that vast, impersonal forces can shape negative outcomes regardless of the contrary wishes of some people involved in the process. "Tech didn’t get worse because techies [workers in the field] got worse. Tech got worse because the condition of the external world made it easier for the worst techies to win arguments."

What solutions for this quandary could be tried, other than "burn them [the allegedly villainous "giants of the internet" such as Amazon and Google] to the ground," in my opinion a bit too drastic? Doctorow insists, "A new, good internet is possible and worth fighting for," and lists some aspects he believes must change. Potential avenues for improvement can be summarized by the need to empower the people who mean well -- the ones Doctorow describes as "people within those institutions who pine for a new, good internet, an internet that is a force for human liberation" -- over those who disregard the concerns of their customers in single-minded greed for profit.

On the wider topic of individual responsibility for the villainous acts of institutions over which one doesn't have any personal control, one might be reminded of the contemporary issue of reparations to historically oppressed groups. Of course, one can quit a job and seek a more ethical employer, but renouncing one's nationality or ethnic ancestry would be severely problematic. However, since that subject veers into "modpol" (modern politics, as strictly banned on an e-mail list I subscribe to), I'll simply point out C. S. Lewis's essay, in a different context, about repenting of other people's sins:

Dangers of National Repentance

Margaret L. Carter

Please explore love among the monsters at Carter's Crypt.

Thursday, September 14, 2023

AI Compositions and Their Influence on Letters as Signals

In Cory Doctorow's latest column, he brings up a potential unintended byproduct of overusing "large language models" (LLMs), the technology behind chatbots such as ChatGPT:

Plausible Sentence Generators

He recalls a recent incident when he wrote a letter of complaint to an airline, threatening to sue them in small claims court, and fed it to such a program for rewriting. He was surprised at the high quality of the result. The site changed his pretty good "legal threat letter" into a noticeably stronger "vicious lawyer letter."

Letters of that type, as well as another of his examples, letters of recommendation from college professors, are performative. They transmit not only information but "signals," as Doctorow puts it. A stern letter from a lawyer sends the message that somebody cares enough about an issue to spend a considerable amount of money hiring a professional to write the letter. A recommendation from a professor signals that he or she considers the student worthy of the time required to write the recommendation.

One of Spider Robinson's Callahan's Bar stories mentions a similar performative function that shows up in an oral rather than written format: spousal arguments. The winner of the argument is likely to be the one who dramatizes his or her emotional investment in the issue with more demonstrative passion than the other spouse.

In the case of written performances, Doctorow speculates on what will happen if AI-composed (or augmented) epistles become common. When it becomes generally known that it's easy and inexpensive or free to write a letter of complaint or threat, such messages won't signal the serious commitment they traditionally do. Therefore, they'll become devalued and probably won't have the intended impact. The messages (like form letters, though Doctorow doesn't specifically mention those) will lack "the signal that this letter was costly to produce, and therefore worthy of taking into consideration merely on that basis."

I'm reminded of the sample letters to congresscritters included in issues of the MILITARY OFFICER magazine whenever Congress is considering legislation that will have serious impact on members of the armed services and their families. These form letters are meant to be torn out of the magazine, signed, and mailed by subscribers to the presiding officers of the House and Senate. But, as obvious form letters, they clearly don't take much more effort than e-mails -- a little more, since envelopes must be addressed and stamps affixed, but not much. So how much effect on a legislator's decision can they have?

Miss Manners distinctly prefers old-fashioned, handwritten thank-you notes over e-mailed thanks because the former show that the recipient went to a certain amount of effort. I confess I do send thank-you notes by e-mail whenever possible. The acknowledgment reaches the giver immediately instead of at whatever later time I work up the energy to get around to it. So, mea culpa, I plead guilty! However, the senders of the gifts themselves have almost completely stopped writing snail-mail letters, so in communication with them, e-mail doesn't look lazy (I hope), just routine. Context is key.

Margaret L. Carter

Please explore love among the monsters at Carter's Crypt.

Thursday, July 13, 2023

Predicting versus Contesting

Few, if any, readers and writers of science fiction believe it exists to predict the future. Strikingly on-target foretellings of future events and technology are occasional, serendipitous accidents. Rather, the genre speculates on the questions "What if...." and "If this goes on...." Cory Doctorow's latest LOCUS essay delivers a slightly different, more radical perspective on what science fiction does:

SF Doesn't Predict

This article consists of the text of a speech he gave in June 2023, when receiving an Honorary Doctor of Laws from York University’s Faculty of Liberal Arts and Professional Studies in Toronto. He begins with an anecdote from his educational career. At the age of seventeen, already professionally selling short science fiction, he inquired at York University's humanities department about getting into the creative writing program. He was turned down because, as he was told, "they only teach literature."

I had a similar, although less blunt and final, experience as an undergraduate. After taking the introductory course in creative writing, I enrolled in an advanced, workshop-type fiction writing course. At the end of the first semester, the professor hesitated to let me into the second semester because I'd submitted only fantasy and horror. He reluctantly let me continue, and I dutifully wrote a slice-of-life story about a military wife coping with a toddler and a baby while her husband was deployed. Nobody could have asked for a more spot-on "write what you know" work. As far as I can recall, it was an okay story and certainly didn't lack vividness or realism. But that wasn't the path I wanted to follow; the marketplace abounds in writers of realistic fiction, and I knew I'd never measure up to most of them. While I sometimes enjoy reading about contemporary settings and characters with no trace of the fantastic, I have no interest in trying to write that genre. (Yes, even though it claims the status of "mainstream," it's a genre.)

Doctorow later rejoiced in belonging to a community, the tech realm, whose members didn't view his science-fiction output with disdain. Rather, he "was surrounded by people who thought that SF writing was literally the coolest thing in the world." The rest of his speech explains why he agrees.

He defines optimism and pessimism as "just fatalism in respectable suits. . . . Both deny human agency, that we can intervene to change things." He subsumes both under the category of "inevitabilism, the belief that nothing can change." This attitude, according to Doctorow, is "the opposite of SF," whose purpose is to imagine alternatives. What it contests is the assumption that there's no alternative to the status quo or the predicted future, that "resistance is futile." He lays out several examples, climaxing with his metaphor of a bus speeding toward the brink of a canyon--unless we take the risk of swerving. The essay concludes, "Hope begins with the ability to imagine alternatives. And there is always an alternative."

That affirmation reminds me of something that irritates me about the fantasy and SF shows I watch on the CW network. A continually recurring line of cliched dialogue laments, "We haven't got a choice!" (I've often wondered whether the same writers compose the scripts for all of those series.) I keep wanting to yell at the screen, "Yes, you featherbrain, you always have a choice."

Margaret L. Carter

Carter's Crypt

Thursday, May 18, 2023

When Do Conspiracy Theorists Make Sense?

Cory Doctorow's column in the May issue of LOCUS discusses what we'd probably think of as paranoid conspiracy theorists, vehemently protesting imaginary global plots against our lives and liberties, who in Britain are often informally labeled "swivel-eyed loons."

The Swivel-Eyed Loons Have a Point

Of course, we do have to distinguish the paranoid looniness from valid concerns. As he says, we all want to "save the children." Most of us, however, want to save them from "real threats who never seem to face justice," while the swivel-eyed loons obsess about "imaginary threats," e.g., "adrenochrome-guzzling Satanists."

Some issues about which he suggests they have valid points:

Automated license-plate recorders, presently used in London, really can constitute "a form of pervasive location-tracking surveillance." That kind of power has been used in the past to target "disfavored minorities" and organizations regarded as suspicious.

While “'Climate lockdowns' are a product of a conspiracist’s fevered imagination," it's nevertheless true that COVID restrictions have sometimes served as a pretext "to control everyday people while rich people swanned around having a lovely time." The powerful were happy to promulgate regulations they didn't consider to apply to themselves.

The "post-ownership society" fearfully anticipated by some conspiracists has already begun to infiltrate the economy. Our Kindle books, music files, and other software don't really belong to us; we lease them from companies that can delete them at will.

What about the futuristic promise of a cashless society, when institutions such as credit card companies will be nearly all-powerful gatekeepers? "Access to financial services is a primary means of extralegal control over whole sectors of the economy."

Doctorow's article offers several other alarming examples.

However, he exaggerates about the demise of DVDs. Yes, we can still buy movies and TV programs (and music) in physical media that we permanently own; they aren't likely to disappear anytime soon. (The choice of renting DVDs from Netflix will go away later this year, though. Sigh.)

"We live in a fraught and perilous time," he reminds us, "and powerful people really do want to capitalize on this situation to enrich themselves at our expense." How can we enjoy the benefits of technologies such as those mentioned in his essay while avoiding their threats to our privacy and autonomy?

Margaret L. Carter

Carter's Crypt

Thursday, March 30, 2023

One Bite at a Time

Cory Doctorow's column for the March 2023 issue of LOCUS, for once, asserts a position I can support without reservation:

End-to-End

Concerning the many problems involved in making the internet user-friendly, a quest for perfection may result in no improvement at all. As Doctorow summarizes the situation, "The internet succeeded where other networks failed" because it didn't try to implement a "seemingly monolithic technological project" that would require all parties to agree on an ultimate solution that would deal with all difficulties once and for all. Instead, find one small element that everyone can accept. "Build that, then do it again, finding another step that everyone can get behind." In other words, figuratively speaking, eat the elephant one bite (or byte?) at a time. To quote Doctorow again, "I want a better internet now, not years down the road. I’ll happily take a smaller bite."

The main issue to which his current column applies this approach is the end-to-end principle, an older name for what's now usually called net neutrality. In brief, "when a willing speaker wants to say something to a willing listener, our technology should be designed to make a best effort to deliver the speaker’s message to the person who asked to get it." After decades of development of the internet, why don't we have this transparently obvious, user-friendly system?

When we ask a question with Google, why does it prioritize its own search engine's results over those of others that might be more relevant to the questioner's needs? When we search for a specific book or other product on Amazon, why do several other products pop up at the top of the page ahead of the one we typed in the search box? Why do Facebook posts from people and organizations we actually want to hear from get drowned in a sea of sponsored posts? Well, yeah, money and profit (duh). But why are such practices legally permitted? Why is Facebook allowed to restrict our access to posts from users we've liked or followed by blackmailing them into "boosting" their posts—paying to have their material seen by people who've expressed a wish to see it? Suppose when we tried to telephone a local business, the phone company routed the call to a rival business that had paid for the privilege? Nobody would stand for that, yet the equivalent happens online all the time.

Doctorow suggests a few modest rules that internet companies should be required to follow, e.g., “The first result for a search should be the product that most closely matches the thing I searched for” and “If I subscribe to your feed, then when you publish something, it should show up in my feed.”

For a long time I was puzzled that my posts on my Facebook author page showed such low numbers of "Reach." The page doesn't have a huge throng of followers, but it certainly has a lot more than those being "reached." It was a shock to learn that in order to be read by more than a handful of followers, those posts needed to be boosted. In other words, I would have to bribe Facebook to carry out the function it purports to perform, connecting senders with willing receivers. Likewise, it's a constant, though minor irritant that searching for a book on Amazon often connects to a page where I have to scroll halfway down to find the desired item. According to Doctorow, the volume of ads and sponsored posts is delicately designed to stay "just below the threshold where the service becomes useless to you." I fear he may be right.

Will the limited ideal of his online utopia ever become a reality? Maybe not, but it's worth discussing.

Margaret L. Carter

Carter's Crypt

Thursday, January 19, 2023

The Fates of Social Networks

Cory Doctorow's latest LOCUS column explores the breakdown of social networking sites, which he seems to believe is the inevitable culmination of their life cycles:

Social Quitting

He focuses on Facebook and Twitter. Are they doomed to go the way of their predecessors such as MySpace? They've had a longer run, but he thinks they, too, are in the process of changing from "permanent to ephemeral."

Personally, I don't expect Facebook to fade away anytime soon like previous services that imploded "into ghost towns, then punchlines, then forgotten ruins." I can't speak about Twitter, since I've never joined it and, given the current turmoil surrounding it, I don't plan to, even though lots of authors make productive use of it. Mainly, I can't imagine myself conjuring up cogent, entertaining tweets several times a day, which seems to be the criterion for using Twitter effectively. I had a MySpace account during the height of its popularity. The site struck me as a visually exhausting mess, dominated by flashy ads and hard to comprehend or navigate. Also, if anybody I knew used it, I never managed to connect with them. I joined Facebook because it became the only reliable way to keep track of many of our contemporary and younger relatives. (People who ignore e-mails will often answer Facebook messages.) Later, numerous organizations and businesses I wanted to keep up with established dedicated Facebook pages.

Doctorow analyzes "network effects," summarized as, "A system has ‘network effects’ if it gets more valuable as more people use it." Facebook's attraction of more and more customers has a snowballing effect; people want to go where other people they know are. When the volume of users reaches critical mass, the "switching cost" becomes prohibitively high for most customers. Leaving the service becomes more trouble than it's worth. As long as the benefits of the service outweigh disadvantages such as becoming the object of targeted advertising, most people who've grown used to the advantages will stick around. But, as Doctorow explains the current situation, social media platforms shift more of their value—the "surplus," in economics terminology—to advertisers rather than users. Later, they tend to get greedy and make things difficult for advertisers, too. Then the "inverse network effects" kick in: The greater number of customers and advertisers that quit the network, the less value exists for those who stay, so even more leave.
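The dynamic Doctorow describes can be illustrated with a toy simulation (my own sketch for illustration, not anything from his column, with all the numbers invented): each user's value grows with the size of the network, the platform gradually claws back the surplus, and once per-user value drops below the switching cost, every departure makes staying less attractive for everyone else.

```python
# Toy model (hypothetical parameters): network effects, surplus capture,
# and the inverse network effect once users start to leave.

def user_value(n_users, surplus_share_to_users):
    """Per-user value: proportional to the size of the network,
    scaled by how much of the surplus users actually keep."""
    return n_users * surplus_share_to_users

def simulate(n_users=1000, switching_cost=400.0, steps=10):
    """Each step the platform shifts 10% more surplus away from users.
    Users leave whenever their value drops below the switching cost;
    departures shrink the network, so staying becomes even less
    attractive -- the inverse network effect."""
    share = 1.0
    history = []
    for _ in range(steps):
        share *= 0.9  # platform claws back more surplus each step
        while n_users > 0 and user_value(n_users, share) < switching_cost:
            n_users -= 1  # each exit lowers everyone else's value
        history.append(n_users)
    return history

print(simulate())
# → [1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000, 0, 0]
```

The collapse is abrupt in this toy model precisely because exits compound: once value dips below the switching cost, each departure pushes the remaining users further below it, which matches Doctorow's picture of services imploding "into ghost towns" rather than fading gradually.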

Although Doctorow doesn't use the term, his explanation reminds me of the "sunk cost" principle. If we've already poured a lot of time, money, or energy into something, we're reluctant to give up on it. We continue to invest in it because otherwise our previous efforts would seem "wasted."

In my opinion, although based on my own probably limited experiences and interests, Doctorow exaggerates as far as Facebook is concerned. I have no intent of abandoning it in the foreseeable future. Our relatives and real-world friends who use the service haven't begun to disappear. (In fact, one who stopped several years ago has come back.) Local businesses still post updates there. Our church has an active page we rely on. My various writing-related groups continue to thrive. As for the advertising, it doesn't bother me. How hard is it to scroll down to the next post? Besides, some ads alert me to products such as new books that might actually interest me. The occasionally outright spooky knowledge of my habits and interests many websites display (how does the weather page know what I recently searched for on Amazon?) has a definite downside in terms of privacy concerns. However, it also offers advantages by way of customizing and streamlining the user's internet experience. And how can I legitimately complain about Facebook advertising when I use the site to promote my own books?

In short, there must be enough people and organizations among my contacts who are as change-averse as I am, to maintain the site's value for me. And I can't believe I'm alone in that position.

Margaret L. Carter

Carter's Crypt

Thursday, November 17, 2022

The Purpose of Horror

What is horror fiction (whether in print or on film) good for? My parents certainly took a dim view of my fervent interest in the genre, beginning at the age of twelve with my first reading of DRACULA. A familiar physiological or biochemical hypothesis proposes that reading or viewing horror serves the same purpose as riding a roller coaster. We enjoy the adrenaline rush of danger without having to expose ourselves to any real risk. Personally, I would never get on a roller coaster except at gunpoint, to save someone else's life, or to earn a lavish amount of money. I'm terrified of anything that feels like falling and don't like any kind of physical "thrill" experience. Yet I do enjoy the vicarious fears of the horror genre. Maybe real-life thrill rides or extreme sports feel too much like actual danger for my tolerance level, whereas artistic terror feels controllable.

H. P. Lovecraft famously asserts, "The oldest and strongest emotion of mankind is fear, and the oldest and strongest fear is fear of the unknown." Therefore, horror is a legitimate subject for art, even though he believes its appeal is restricted to a niche audience. We might link Lovecraft's thesis to the physiological model, in that the feared unknown becomes manageable when confined within the boundaries of a story.

In DANSE MACABRE, Stephen King suggests that all horror fiction has roots in our fear of death. Embodying the threat of death in the form of a monster entails the hope that it can be defeated. I think it's in 'SALEM'S LOT that a child character says, "Death is when the monsters get you."

In an interview in the October 2022 LOCUS, author Sarah Gailey maintains that "horror is designed to put the reader in touch with an experience of the body, where that experience is one that they typically would not wish to have." Our culture separates body and mind from each other, while, Gailey says, "Horror serves to remind us that those things aren’t separate. The ‘I’ who I am is absolutely connected to the physical experience of my body and the danger that body could face in the world, and horror does an incredible job of reminding readers that we live in bodies, we live in the world, and we are creatures."

This comment reminds me of C. S. Lewis's remark that the truth of our nature as a union of both the spiritual and the physical could be deduced from the existence of dirty jokes and ghost stories. Bawdy humor implies that our having fleshly bodies is somehow funny, shameful, or incongruous. No other species of animal seems to find it funny just to be the kind of creature it is. Supernatural horror highlights the sense that separation of body and soul, which should form a single, unified entity, is deeply unnatural. Hence we get the extremes of zombies (soulless yet animated bodies) and ghosts (disembodied spirits).

Margaret L. Carter

Carter's Crypt

Thursday, November 10, 2022

Corporate Bullies and Copyright

Cory Doctorow's article for the November 2022 LOCUS discusses the ever-increasing reach of monopolies that prey on the work of writers and other content creators, in terms of a parable about bullies stealing lunch money. If the victims get more lunch money, they don't get more food; the bullies get more money. No matter how much artistic creators produce and theoretically earn, the greed of the rights-grabbers will never be sated:

Structural Adjustment

Doctorow reminds us that only five (maybe, soon, four) major publishing conglomerates exist and that the realms of physical bookselling, online retailing and e-book sales, book distribution, and music production are each dominated by one mega-corporation. "Publishing and other 'creative industries' generate more money than ever — and yet, despite all this copyright and all the money that sloshes around as a result of it, the share of the income from creative work that goes to creators has only declined." In book publishing, unless an author chooses to self-publish (or go with small independent presses, which he doesn't mention in this article), "Contracts demand more — ebook rights, graphic novel rights, TV and film rights, worldwide English rights — and pay less." And of course the major online retailers exercise their dominance over self-publishers' access to markets.

He summarizes in terms of his parable, "We’re the hungry school kids. The cartels that control access to our audiences are the bullies. The lunch-money is copyright."

Asserting, "Cartels and monopolies have enacted chokepoints between creators and audiences," Doctorow recommends a book, CHOKEPOINT CAPITALISM: HOW BIG TECH AND BIG CONTENT CAPTURED CREATIVE LABOR MARKETS AND HOW WE'LL WIN THEM BACK, and gives an example of one of the strategies recommended in it.

While I understand his points and recognize the dangers he often cites in his articles, as a reader (and online consumer in general) I would have trouble getting along without Amazon. It's a great boon to be able to find almost any book, no matter how obscure and long out of print. I value being able to acquire the complete backlist of almost any author I'm interested in. I enjoy having purchases delivered to our doorstep, since the older I get, the less I want to go out searching for items -- especially given the not-unlikely frustration of not finding what I want in stock locally. And I trust Amazon to fill orders reliably and handle credit information securely, rather than my taking the risk of buying from websites unknown to me. As an author, if I decide to self-publish a work, I like being able to upload it for free on the most popular e-book seller's site, plus other retailers through Draft2Digital. At the same time, I realize Doctorow isn't wrong that by embracing convenience and economy, we put ourselves at the mercy of the provider's whims. For one thing, buying a product in electronic form (e-book, music file, movie, etc.) means the seller can make it evaporate from the consumer's hard drive or tablet anytime. So what's the ideal solution? I don't know.

Margaret L. Carter

Carter's Crypt

Thursday, September 15, 2022

The Meaning of Money

What gives money (or any "moneylike" form of currency) its value? What makes us willing to accept it in exchange for concrete items of value? Cory Doctorow dissects this conundrum in his latest LOCUS column:

Moneylike

After an attempt to define money, he explores its origin. He rejects the familiar hypothesis of its having been invented to solve the cumbersome difficulties of barter, labeling this a "folk-tale." Instead of a "bottom-up" model of the creation of media of exchange, he describes money as a "top-down" system imposed by governments, which required the existence of currency to collect taxes in order to provision their armies. Where, then, does the money itself come from? It's generated by governments, and problems can occur if the state issues either too much or too little of it. Doctorow illustrates and analyzes this model at length in an extended parable. Items other than official currency can be "moneylike," such as gift certificates. Elaborating on the concept of "moneylike" media of exchange, he goes into detail about how cryptocurrency works, especially with reference to internet ransomware.

Robert Heinlein includes a discussion of what constitutes value in STARSHIP TROOPERS, where the narrator's high-school teacher refutes the claim that labor alone creates value. Heinlein's TIME ENOUGH FOR LOVE contains an amusing scene in which Lazarus Long, acting as the banker for a frontier planet colony, destroys a batch of paper money, to the horror of the man he's dealing with. Lazarus has to explain that money doesn't consist of a physical thing with objective value, but a consensus reality people agree on. As long as Lazarus keeps a record of the serial numbers from the bills he gets rid of, there's no need to preserve the bills themselves (which pose a theft risk).

In one of Terry Pratchett's Discworld novels, the capital city adopts the Golem Standard. What could serve as a better backing for currency than objects that are almost impossible to steal, counterfeit, or destroy (especially since they're sapient and can defend themselves)?

In the Star Trek universe, conflicting information about the future economy appears in the various series. In the original series, Starfleet personnel must get paid somehow, as shown by Uhura's purchase of a tribble in "The Trouble with Tribbles." Outside of Starfleet, the existence of money is confirmed in "Mudd's Women" and the episode in which Spock poses as a Vulcan merchant. Supposedly by the time of STAR TREK: THE NEXT GENERATION the ubiquity of replicators has made the Federation a post-scarcity society with no need for money. Yet on the fringes (as in DEEP SPACE NINE) and outside the Federation's borders, as made clear by the Ferengi veneration of profit, money exists. Gold-pressed latinum as a medium of exchange is explained on the premise that it's one of the few substances incapable of being replicated. (We have to assume dilithium crystals must fall into the same category, or else obtaining them wouldn't be such a vital preoccupation in the original series.) It seems reasonable that luxury goods in the form of items not produced by replicators, such as the Picard family's wines, would require a medium of exchange for their sale. Or are we to assume creators of such products make them for the sheer joy of the process and give them away? Regardless of post-scarcity abundance, widespread actions like that would imply a radical change in human nature that we don't witness among the Terrans of the Star Trek universe in any other behavioral category.

Margaret L. Carter

Carter's Crypt

Thursday, July 14, 2022

The Crisis and the Swerve

Cory Doctorow's column in this month's LOCUS, whether you fully agree with his view of the global situation or not, displays an impressive deployment of an extended metaphor:

The Swerve

This essay compares the climate change problem to a speeding bus about to crash off a cliff. Should we grab the wheel from the driver and swerve off the road at high speed, even at the risk of a disastrous crash? As you'll notice a few sentences into the essay, Doctorow holds an extreme view of the inevitable severity of climate change. Yet he ends with an ultimately (though guardedly) optimistic conclusion that total catastrophe can still be avoided. But, in his opinion, we've come too far already to evade the damage inherent in the swerve.

This is how he describes the scenario at the beginning of the article:

"We’re all trapped on a bus. The bus is barreling towards a cliff. Beyond the cliff is a canyon plunge any of us will be lucky to survive. Even if we survive, none of us know how we’ll climb out of that deep canyon. Some of us want to yank the wheel. The bus is going so fast that yanking the wheel could cause the bus to roll. There might be some broken bones. There might be worse than broken bones. The driver won’t yank the wheel."

In Doctorow's formatting, however, each of those sentences sits on a line by itself. Arranged that way, the opening can't fail to grab a reader's attention. The alarm and urgency of his message come through loud and clear. He goes on to condemn climate change denial, express his disapproval of "incrementalism," and discuss some of the public responses to the problem, positive and negative, that have been proposed or attempted so far.

It seems to me that one significant reason why many people don't believe we're about to drive off a cliff is that climate degradation is a "slow catastrophe." It doesn't evoke immediate alarm like an asteroid on a collision course with Earth. The effects of global climate shifts sneak up on us over a span of years or decades. So those who think we still have plenty of time to deal with the crisis aren't necessarily greedy, callous, or oblivious.

Doctorow estimates that in 1992 we still had the option of "building a bridge" across the canyon. By now, he asserts, we've lost the opportunity of "averting the disaster" and instead must focus on "surviving the disaster." Still, he comes to an optimistic conclusion, for a certain value of "optimistic." He describes the potential "happy ending" in terms of the extended metaphor this way:

"We’ll swerve. The bus will roll. It will hurt. It will be terrible. But we won’t be dead on canyon floor. We’ll fix the bus. We’ll make it better. We’ll get it back on its wheels. We’ll get a better driver, and a better destination."

Margaret L. Carter

Carter's Crypt

Thursday, May 12, 2022

Writing to the Future

Cory Doctorow's latest LOCUS column, on writing nonfiction pieces that will still be relevant by the time they're published:

Six Weeks Is a Long Time

The time lag that may undercut the applicability of a written work, according to him, seems to be getting shorter. Circumstances can always truly change overnight or in an instant, of course. Consider the difference between September 10, 2001, and September 11 of that year. Yet it may seem odd to define an essay meant to be read a month and a half after it's written as "futuristic thinking." The near future, however, is still the future. As C. S. Lewis's senior demon says in THE SCREWTAPE LETTERS, all human beings constantly travel into the future at the rate of sixty minutes per hour.

I once read a story about a time-viewing machine that allows the user to look into the future. The culture-transformative feature of this device is that it has no lower limit on how short a time span it can look ahead. And apparently (if I remember correctly) one can view events in other places, not just where one happens to be personally located. Suppose you peer ten seconds into the future? You're effectively spying on people's actions in the present, in real time. (On second thought, it may have been a past-viewing device. Same principle applies.)

Doctorow wrote this month's article in the midst of a new, highly contagious COVID variant and the imminent invasion of Ukraine, addressing us "in the distant, six-week future" from his moment in the past when "the odds of nuclear Armageddon [seemed] higher than they’ve been for decades." He greets his future audience thus: "I bear glad tidings. Only six weeks ago, you, me and most everyone else we knew couldn’t imagine getting through these next six weeks. If you’re reading these words, you did the unimaginable. Six weeks and six weeks and six weeks, we eat the elephant of the unimaginable one bite at a time."

We're familiar with the question of what message we'd like to send to our past selves. There's a country song about writing a letter to "me at seventeen." But what message might you want to send to your future self? Unlike speaking to one's past self, this is something we can actually do. Are there important events or thoughts you might want to write down as reminders in case you've forgotten them a month, a year, or decades from now? What would you like to record as an important reminder for the citizens of your city, your country, or the world next month, next year, a decade from now, or generations later? People often do the latter with physical "time capsules." Would the things you choose to highlight turn out to be important to those future audiences or not?

Isaac Asimov wrote at least one essay predicting future technological and social advances, and surely he wasn't the only SF author to do that. Some of his predictions have come true; many haven't. An essay like that could be considered a message to future generations.

Margaret L. Carter

Carter's Crypt

Thursday, March 10, 2022

Big Tech Tyranny?

Cory Doctorow's March LOCUS column discusses tech tycoons from the perspective of monopoly and world domination. Well, that phrase may be a bit exaggerated but not totally inapplicable, considering his term "commercial tyrant":

Vertically Challenged

Is meritocracy a "delusion"? Are people such as Mark Zuckerberg (founder of Facebook) unique geniuses, or did they just get lucky? One might maintain that some sort of genius is required to recognize opportunities and take advantage of the "luck," but that's beside Doctorow's point. He argues against "vertical integration" and in favor of "structural separation." Fundamental antitrust principles should forbid mega-corporations from competing with the companies to which they sell services. "Amazon could offer virtual shelf space to merchants, or it could compete with those merchants by making its own goods, but not both. Apple could have an app store, or it could make apps, but not both."

It's easy to see his point. It would be better if Google could somehow be prevented from giving preference in search results to entities in which it has a financial interest. On the other hand, more ambiguous "liminal" cases exist, a point Doctorow himself does acknowledge. For example, "Amazon might say it gives preferential search results to businesses that use its warehouses because it can be sure that those items will be delivered more efficiently and reliably, but it also benefits every time it makes that call." Granting the second half of that sentence, I'm still not sure this practice is a bad thing. Given a choice between two identical products of equal price, I DO tend to choose the one labeled "Fulfilled by Amazon" for that very reliability, as well as speed of delivery. As for splitting off Amazon's publishing services, as he advocates, I'd be dubious. I like the way Kindle self-publishing currently works.

Doctorow also brings up problems that may require "structural integration" rather than separation, to prevent Big Tech from evading its legitimate responsibilities. He tentatively calls for "a requirement that the business functions that harm the rest of us when they go wrong be kept in-house, so that the liabilities from mismanaging those operations end up where they belong." Is there a simple answer to the dilemma of maintaining the conveniences we enjoy while preventing the abuses?

Margaret L. Carter

Carter's Crypt

Thursday, January 13, 2022

Luddites and SF

The term "Luddite" is typically applied to people who oppose technological advances. That's basically what I've always assumed the word to mean. Well, Cory Doctorow's latest LOCUS column corrects that misconception:

Science Fiction Is a Luddite Literature

Luddites were textile workers belonging to a secret society in England in the early nineteenth century, best known for destroying the newfangled equipment in textile mills. According to Doctorow, however, their primary objective wasn't the destruction of machinery. That was "their tactic, not their goal." Rather, their goal was "to challenge not the technology itself, but rather the social relations that governed its use." Doctorow details some of the local and global changes that resulted from mechanization of the textile industry. Factory owners could have used the new-model looms to improve employment conditions for skilled craftspersons. Instead, employers chose to hire fewer workers at lower wages. The Luddites imagined and agitated in favor of a different path for the industrial revolution. Doctorow gives several examples of how we, today, "are contesting the social relations surrounding our technologies."

New technology always generates social change, often with unanticipated consequences. Robert Heinlein's "The Roads Must Roll" is one story that pioneered the type of science fiction focusing not on aspects such as the technical details of how automated roads work, but on how their existence affects the society that uses them. An obvious real-world example, the automobile, had easily predictable effects on individuals' freedom of movement and the decline of passenger railroads, but few people probably foresaw the impact on courtship customs and sexual mores. With cars, the balance of power in courtship shifted from the girl, who invited the boy to "call on" her in her parents' home, to the boy, who invited the girl on a "date" outside her parents' control. And of course the automobile gave young couples more freedom for sexual experimentation than they had under the old system. Consider the telephone: In his final novel, TO SAIL BEYOND THE SUNSET, Heinlein has the narrator's father, a doctor, predict at the turn of the nineteenth to the twentieth century that telephones in the home would mean the end of house calls. When the only way to get a home doctor visit was to send someone in person to summon him, patients expected house calls only in emergencies. Once they could contact their family physician by picking up the phone, they would call for less and less urgent reasons, and doctors would soon refuse to cooperate. (This decline probably happened more slowly than Heinlein anticipated; I have a dim memory of the doctor's visiting me at home when I had measles around age five, the only time I've personally encountered a doctor's house call.)

Like the mechanical looms in the early stage of the industrial revolution, most if not all new technologies benefit some people while disadvantaging others. The ease of paying bills and performing other transactions online provides great convenience for most while disadvantaging those who can't afford a computer and internet connection or, like my ninety-year-old aunt, simply decline to adopt those "conveniences." Businesses now routinely expect customers to have internet access and hence make transactions more difficult for those who don't. Cell phones have made fast connections everywhere all the time routine, so that people are often expected to be instantly available whether they want to be or not. Moreover, as pay telephones have been used less and less, they've tended to disappear, so when anybody does need one—whether for lack of a mobile phone, a dead battery, or a dead zone with no cell service—a phone booth is almost impossible to find. I actually "met" a person nagging me for contact information on Goodreads who accused me of lying when I said I didn't own a smart phone. (Yes, I have a cell phone for urgent or otherwise time-sensitive communication away from home, but it's a plain old "dumb" flip model.)

According to Doctorow, science fiction is a Luddite genre because both the historical movement and the fiction "concern themselves with the same questions: not merely what the technology does, but who it does it for and who it does it to."

Margaret L. Carter

Carter's Crypt

Thursday, November 04, 2021

Unimaginable?

Cory Doctorow's latest LOCUS column tackles the issue of scenarios that are allegedly impossible to imagine:

The Unimaginable

The specific scenario he discusses here is the end of capitalism. Lots of authors, he points out (including himself) have written about postcapitalist, sometimes post-scarcity societies. What's hard to imagine, he suggests, is the process of transition from the present to those hypothetical futures. Doctorow cites several examples of SF works that portray postcapitalist worlds, few of which go into detail about how those societies came about, Kim Stanley Robinson being one exception. Would the shift happen through violent revolution or gradual evolution?

Anyway, the job of science-fiction and fantasy writers is to imagine things, however wild or seemingly improbable, right? John Lennon's song "Imagine" claims "it's easy if you try" to conceive of such things as a peaceful Earth with "no possessions," no "greed or hunger," and "nothing to kill or die for." Imagining a utopia (not that I'd want to live in his, since I have doubts of the desirability of a world without countries or possessions, not to mention Lennon's anti-religious slant) may be easy, but visualizing how to get there involves a whole different order of difficulty.

Many, if not most, fictional futures, of course, aren't meant as literal predictions but as cautionary "if this goes on. . ." warnings or optimistic thought experiments in constructing societies better than our own. Few people would want to live in Orwell's NINETEEN EIGHTY-FOUR or Atwood's THE HANDMAID'S TALE. In the former, the rewriting or obliteration of history as a core policy of the despotic regime deliberately leaves the question of how Big Brother rose to power unanswerable. THE HANDMAID'S TALE (novel) offers a few glimpses of the transition but no detailed account of how we might get from here to there, while the TV series expands on these hints in extended flashbacks but still leaves many questions unanswered.

Although Edward Bellamy claimed his 1888 utopian novel LOOKING BACKWARD: 2000-1887 wasn't intended as a literal plan for political action, a movement to implement his ideas sprang up, in the form of "Nationalist Clubs" active in American politics well into the 1890s. In time Bellamy himself did get involved in this movement, which achieved some practical results before dying out. As attractive as some aspects of Bellamy's vision seem to me, I don't expect to see it become reality, although a few elements exist already—for instance, the cashless society. On the whole, though, over twenty years have passed since 2000, and we're not there yet. Bellamy's faith in the capacity of social structures to change human nature within a generation or two (abolishing greed, violence, etc.) seems naive today. I don't expect a world government such as LOOKING BACKWARD and many near-future SF novels take for granted. However, I wouldn't be surprised if a worldwide confederation similar to the EU eventually developed, but probably not in my lifetime.

One thing I especially like about S. M. Stirling's long-running Emberverse series, beginning with DIES THE FIRE, is that it depicts not only the violent collapse of civilization as we know it, along with the immediate post-apocalyptic scenario, but also the transitional phase experienced by the survivors and their rebuilding of a new society. The series follows the changed world over the course of two generations. We witness how the new world develops into not a dystopic hellscape, an ideal utopia, or a duplicate of the old order, but something simply different, better than the present in some ways and worse in others.

Margaret L. Carter

Carter's Crypt

Thursday, September 16, 2021

Advice on "Breaking In" to Publishing

Cory Doctorow's newest LOCUS column discusses the beginning writer's obsessive quest for tips from pros on how to get started in publishing. In particular, we love to read about how successful authors landed their first sales:

Breaking In

The major premise of this article: The publishing field changes so fast that a veteran author's story of how he or she first got accepted for professional publication isn't likely to be of any practical help today. As Doctorow puts it with reference to his own early experiences, "While I still have an encyclopedic knowledge of the editorial peccadilloes of dozens of publications, most of them no longer exist, and the ones that do have been radically transformed in the intervening decades." What Doctorow supplies instead is "meta-advice," advice on where to find the best advice. According to him, novice writers can get optimal assistance by pooling their knowledge of current publishing practices and trends with other novice writers, sharing what they've discovered through researching markets and submitting to editors. "Just as a writers’ critiquing circle should consist of writers of similar ability, so too should a writers’ professional support circle consist of writers at similar places in their careers."

He does offer some general guidelines applicable to everyone, a more specific, pragmatic version of Heinlein's well-known "rules." Doctorow also narrates his own "breaking in" story with mention of several publishing veterans who assisted him, including Judith Merril. He declares that an established author's most "powerful tool for helping out new writers" is encouragement.

My first adventure in professional publication (my only previous published work being limited to short pieces in our high-school newspaper), in the late 1960s when I was just over twenty years old, certainly has little if any practical application for writers today. I didn't have the benefit of mentors or networking of any kind. I knew nothing about submitting manuscripts except that they had to be double-spaced on only one side of the paper and had to include a SASE (self-addressed, stamped envelope, for those who've never submitted a paper manuscript). My sole source of information about the industry came from the annual WRITERS' MARKET reference volume in the public library. Today's novice writers are so fortunate to have the resources of the internet. I assembled a collection of stories for a vampire anthology, wrote an introduction, and sent the package to Fawcett in New York. After a year of silence, I mailed them a humorous "haven't heard from you" greeting card. Now that I know better, I'd never think of doing such a thing. Yet they responded promptly, apologized for the long wait, and offered me a contract. In view of my total ignorance, the editor had to explain to me how anthology payments worked and how to arrange for reprint permissions. That proposal became my first book, CURSE OF THE UNDEAD, a mass market paperback.

My first professional fiction sale came about in a more conventional manner that still applies to today's markets, other than the shift from snail mail to e-mail submissions and communications. I received a call for submissions to Marion Zimmer Bradley's second Darkover anthology, FREE AMAZONS OF DARKOVER, probably because the rudimentary fan activities I'd started doing had somehow gotten me on Bradley's mailing list. The zip code on the envelope, however, was wrong, and the letter had reached me barely in time to meet the deadline, if I worked very quickly (for me—I wasn't quite as slow then as now, but I haven't been a truly fast writer since my teens). So this sale had an element of luck, too; the submission invitation could have been lost completely. Without much hope of success, I wrote a story and mailed it just in time. To my surprise, it was accepted. After that, I had stories included in numerous later Darkover anthologies. They stayed in print for many years and, for a long time, supplied my most reliable (although modest in amount) source of royalty income.

Doctorow's "advice" for beginners may be broadly summarized in the eloquent statement, "Writers blaze their own trails, finding mentors or not, getting lucky or not, agonizing and working and reworking, finding peers and lifting each other up."

Margaret L. Carter

Carter's Crypt

Thursday, August 12, 2021

Handling Editorial Feedback

Kameron Hurley's latest LOCUS column focuses on how to evaluate feedback from editors:

When Should You Compromise?

Her guiding principles are "Understand the story you are trying to tell" and "Be confident in the story you're telling." In the revision process, keep the theme, the emotional core in mind; "figure out what your story is about, and cut out anything that isn’t that – and add only bits that are in support of that story." The way she describes her process, she seldom argues with editors to justify her choices. She accepts the suggestions that take the story in the direction she wants it to go and disregards the rest. If "you don't know what the book is," she cautions, you may find yourself trying to revise in accordance with every criticism you get, even those that contradict each other, and end up in a "tailspin."

She also says she typically has to "write a significant number of words" to figure out what the story is really about. That statement slightly boggles me. Shouldn't that figuring-out happen in the outlining phase? Granted, however, many authors consider outlines constraining and need the exploratory process of actual writing in order to accomplish what "plotters" usually do in prewriting.

I've hardly ever had to grapple with the kind of overarching plot and character edits she discusses. Maybe any of my fiction that had serious problems on that level has been rejected outright, or maybe I've been fortunate enough to work through any such problems at the pre-submission stage with the help of critique partners. Most often, my disagreements with editorial recommendations have concerned details of sentence structure, word choice, and punctuation. When the latter kinds of "corrections" arise, house style usually rules, no matter how I feel about it. I consider the "Oxford comma" indispensable, but one of my former e-publishers didn't allow it except in rare cases. Worse yet, they didn't want commas between independent clauses. I gritted my teeth and allowed stories to go out into the world punctuated "wrong" by my standards. On other stylistic issues, I sometimes agree with the editor and sometimes not. If the disagreement isn't vital to me, I usually let it go to save "fights" for instances where the change makes a real difference.

Most editors, if not all, have personal quirks and fetishes. I had one who insisted "sit down" and "stand up" were redundant and wanted the adverbs omitted. Really? Do most people invite a guest to take a seat with the single word "Sit" as if speaking to a dog? I gave in except when a word indicating motion was definitely needed. Another declared that "to start to do a thing is to do it," so one should never state that a character is starting to do something. Then how does one describe an interrupted action without unnecessary wordiness? The small-press editor who published my first novel told me up front that they didn't permit reversing subject and verb in dialogue tags; if I wrote "said Jenny" instead of "Jenny said," they would automatically change it, no argument allowed. That house rule didn't bother me, although I never found out what he had against the reversal; maybe he thought it sounded too old-fashioned.

That book, a werewolf novel, was the only fiction project on which I've faced big-picture editing such as Hurley discusses. The editor warned me that the manuscript would face a merciless revision critique, which indeed it did. The pages came back to me covered in emphatic handwritten notes. I balked at only a few of his revision suggestions and went along with the vast majority. The two I remember clearly: I refused to write out the heroine's stepfather, because I felt the story needed her little sister, who couldn't exist otherwise. I kept more of the viewpoint scenes from the heroine's long-lost father, the antagonist, than the editor wanted me to delete, and later I wished I'd retained still more. (I re-inserted a little of that material when a later publisher reissued the book.) The result slashed the original text by almost half. The editor wrote back in obvious shock that he hadn't really expected me to make ALL those changes. Huh? How was I to know that, with (as I felt) my first chance for a professionally published book-length piece of fiction at stake? The acerbic tone of his corrections made no distinctions to indicate which changes were more important than any others.

Although I was generally pleased with the final result, I suspect the situation was, as Hurley puts it, a case where the editor "was reading (or wants to read) an entirely different book." The publisher was a horror specialty small press, and what I was really trying to write, most likely, was urban fantasy, although the term hadn't yet become widely known at that time. The editor remarked that the protagonist was the least scary werewolf he'd ever seen. Well, I didn't mean for her to be scary, except to herself. Her father, a homicidal werewolf, was intended as the source of terror. I saw the protagonist as a sympathetic character struggling with an incredible, harrowing self-transformation. The editor also didn't seem to care much for the romance subplot, which I kept intact, not wanting the heroine to appear to exist in a vacuum and already having trimmed a couple of workplace scenes at his request. In fact, I wanted to write something along the line of Anthony Boucher's classic novelette "The Compleat Werewolf," a contemporary fantasy with suspense and touches of humor, which of course (as I recognize now) didn't fit comfortably into the genre conventions of horror. Anyway, the publisher produced a nice-looking trade paperback with a fabulous cover, and I remain forever grateful for their giving me my first "break"—not to mention getting me my one and only review in LOCUS!

In any case, Kameron Hurley's closing remark deserves to be taken to heart by any author dealing with either critique partners or professional editors: "The clearer you are about the destination you want to arrive at, the easier it is to sift through all the different directions and suggestions you get from people along the way."

Margaret L. Carter

Carter's Crypt

Thursday, July 15, 2021

Monopolies and Interoperability

Another LOCUS article by Cory Doctorow on monopolies and trust-busting:

Tech Monopolies

He begins this essay by stating that he doesn't oppose monopolies for the sake of competition or choice as ends in themselves. He cares most about "self-determination." By this he means the individual consumer "having the final say over how you live your life." When a small handful of companies controls any given field or industry, customers have only a limited range of products or services to choose among, preselected by those companies, even if this limitation remains mostly invisible to the average consumer. Not surprisingly, Doctorow focuses on this constraint as imposed by Big Tech. He recaps the growth of "the modern epidemic of tolerance for monopolies" over the past forty years. In the present, technology giants tend to crush small competitors and merge with large ones.

To some extent, this tendency—e.g., the situation Doctorow highlights in which everybody is on Facebook because everybody else is, in a feedback loop of expansion—provides a convenience to consumers. I'm glad I can find just about anyone I want to get in touch with on Facebook. As a result of such "network effects," a system becomes more valuable the more users it has. As a reader and a bibliographer, I don't know how I'd manage nowadays if Amazon didn't list almost every book ever published. I resent the brave new broadcasting world in which I have to pay for several different streaming services to watch only a couple of desired programs on each. I LIKED knowing almost any new series I wanted to see would air on one of our hundreds of cable channels. (Yes, we're keeping our cable until they pry it out of my cold, dead remote-clicking hand.) On the other hand, I acknowledge Doctorow's point that those conveniences also leave us at the mercy of the tech moguls' whims.

Half of his article discusses interoperability as a major factor in resisting the effects of monopolies. Interoperability refers to things working together regardless of where they come from. All appliances can plug into all electrical outlets of the proper voltage. Any brand of light bulb or battery can work with any brand of lamp or electronic device. Amazon embraces interoperability with its Kindle books by allowing customers to download the Kindle e-reading app on any device. Likewise, "all computers are capable of running all programs." For self-published writers, services such as Draft2Digital offer the capacity to get books into a wide range of sales outlets with no up-front cost. Facebook, on the other hand, forecloses interoperability by preventing users from taking their "friends" lists to other services, a problem that falls under "switching costs." If it's too much trouble to leave Facebook, similar to the way it used to be too much trouble to change cell phone providers before it became possible to keep your old phone number, consumers are effectively held hostage unless willing to pay ransom in the form of switching costs (monetary or other).

Doctorow concludes, however, with the statement that the fundamental remedy for "market concentration" isn't interoperability but "de-concentrating markets." Granting a certain validity to his position, though, how far would we willingly shift in that direction if we had to give up major conveniences we've become accustomed to?

Margaret L. Carter

Carter's Crypt

Thursday, June 10, 2021

Plotting and Discovery

In the June issue of LOCUS, Kameron Hurley writes about how she gets from the beginning of a story to the end:

Endings and Beginnings

I'm always interested in the techniques used by other writers, and Hurley's current procedure isn't quite like any I've come across before. She describes how her method changed from free-writing in a process of discovery all the way through a piece of fiction to a hybrid of freeform and outlining. Early in her career, she "began every story with a scene, an inciting incident, a mood, a situation, and wrote until [she] figured out what happened next." She ended up with "dozens and dozens of beginnings, a few middles, and not a lot of endings." As she points out, it's hard to sell beginnings and middles to publishers.

Now she free-writes the beginning, works on it until the characters and their motivations become clear, and then plots the rest of the book. She needs to write a story opening that establishes all the vital "problems, relationships, tensions, and setups" before she can move forward. Judging from the rest of the essay, Hurley seems to be very much a character-driven rather than plot-driven writer. She finds that, for her, it's "impossible to write an ending unless the beginning works." She concludes the essay with the principle, "Get the first part right, and you'll find the ending was staring at you all along."

This method runs contrary to the common advice to write the ending first and then work out what needs to happen to get there. Even if a writer doesn't literally compose the final scene first, it's generally assumed that for effective fiction writing the author has to know the culmination all along. On the other hand, Nora Roberts, in answer to a question at a conference session where I heard her speak, claimed she didn't outline her Eve Dallas mysteries (published under the name "J. D. Robb"). She was as surprised by the twists and turns of the murder investigations as Lt. Dallas was. The notion of writing a detective story that way boggled my mind. Imagine the backtracking and revision that must be required to make all the clues fit the solution. Yet clearly this method works for Roberts, who dependably releases two Lt. Dallas "In Death" mysteries every year in addition to the Nora Roberts romances.

I'm one of those dedicated outliners Hurley mentions, who would find her old process, if not exactly "horrifying" as she puts it, distressingly inefficient. As a novice writer, I surged forward through my narratives on waves of inspiration. In my teens, writing short pieces, I found that approach could work well enough, in the sense that I finished stories. (Whether they were any good is a different matter.) Holding a short-story or novelette plot in my head from beginning to end wasn't hard. When I started trying to create novels, though, starting at the beginning and charging forward often meant not reaching the end at all, because I'd get bogged down in the middle. I realized I needed to know where the plot was going and the steps along the road. For the same reason, although I used to occasionally write scenes out of order (as Diana Gabaldon, a bestselling "pantser," does), I've long since switched to linear scene-by-scene composition following my outline. With my early novel-writing attempts, if I yielded to the temptation of writing the most "exciting" incidents first, I tended to get bored with the necessary filling-in work. Some "pantsers" find an outline too limiting. I feel just the opposite; the outline liberates me from the fear of getting stuck in the middle and losing interest in the project.

Regardless of one's favorite method of composition, one of Hurley's discoveries has general application: Plot doesn't consist of "what happened to people"; it's "how people respond to and influence the world around them."

Margaret L. Carter

Carter's Crypt

Thursday, May 13, 2021

Quantitative and Qualitative

Cory Doctorow's latest LOCUS column analyzes the difference between quantitative and qualitative measurements and the pitfalls of depending solely on the former:

Qualia

He begins with examples from the COVID-19 pandemic. The University of Illinois at Urbana-Champaign became the epicenter of a COVID outbreak as a result of putting too much faith in an epidemiological model produced by "a pair of physicists." (The article doesn't mention why they were chosen to work the calculations instead of specialists in epidemiology.) The predictions didn't take into account the variables of human behavior, the "qualitative" element. The article cites contact tracing as another example of similar problems. Regardless of how accurate the math based on the data may be, do the infected people trust contact tracers enough to supply reliable data? Those who work with quantitative elements such as statistics and mathematical models have to restrict their research to elements that can be quantized. As Doctorow puts it, "To do math on a qualitative measurement, you must first quantize it, assigning a numeric value to it," a difficult and dubiously reliable process. (E.g., "How intense is your pain?" I never quite know how to answer that question on a scale of one to ten.)

Quantitative disciplines, as he summarizes the issue, "make very precise measurements of everything that can be measured precisely, assign deceptively precise measurements to things that can’t be measured precisely, and jettison the rest on the grounds that you can’t do mathematical operations on it." He compares this process of exclusion to the strategy of the proverbial drunk searching for his car key under the lamppost—not because that's where he lost it, but because that's where the light is.

Doctorow applies the principle to an extended discussion of monopolies, price-fixing, collusion, and antitrust laws. As an example of the potential injustice generated by "treating all parties as equal before the law," he mentions the designation of Uber drivers as "independent contractors." When treated as equivalent to giant corporations, those drivers are forbidden to "form a collective to demand higher wages," because that's legally classified as "price-fixing."

Although Doctorow doesn't mention writers, the same absurdly imbalanced restrictions can be made to apply to them. If an authors' organization promulgates a model contract and puts pressure on publishers to adhere to it, that's prohibited as "collusion" in restraint of trade.

While, according to Doctorow, "Discarding the qualitative is a qualitative act. . . . the way you produce your dubious quantitative residue is a choice, a decision, not an equation," that doesn't mean quantitative measures are useless or inherently evil. The quest for objectivity has its legitimate role—"just because we can’t rid ourselves of the subjective, it doesn’t follow that we must abandon the objective." Reliable empirically based outcomes result from balancing the quantitative and the qualitative components of the available evidence.

Margaret L. Carter

Carter's Crypt