Cory Doctorow's latest LOCUS column continues his topic from last month: the sharp divide between the artificial intelligence of contemporary technology and the self-aware computers of science fiction. He elaborates on his arguments against the possibility of the former's evolving into the latter: Past Performance
He explains current machine learning "as a statistical inference tool" that "analyzes training data to uncover correlations between different phenomena." That's how an e-mail program predicts what you're going to type next or a search engine guesses your question from the initial words. An example he analyzes in some detail is facial recognition. Because a computer doesn't "know" what a face is but only looks for programmed patterns, it may produce false positives such as "doorbell cameras that hallucinate faces in melting snow and page their owners to warn them about lurking strangers." AI programs work on a quantitative rather than qualitative level. As remarkably as they perform the functions for which they were designed, "statistical inference doesn’t lead to comprehension, even if it sometimes approximates it." Doctorow contrasts the results obtained by mathematical analysis of data with the synthesizing, theorizing, and understanding processes we think of as true intelligence. He concludes that "the idea that if we just get better at statistical inference, consciousness will fall out of it is wishful thinking. It’s a premise for an SF novel, not a plan for the future."
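The kind of "statistical inference" Doctorow describes can be sketched in a few lines: a program that predicts the next word purely by counting which word most often follows which in its training data, with no notion of meaning at all. The tiny corpus below is invented for illustration.

```python
# Toy "next word" predictor: pure co-occurrence counting, no comprehension.
# The training text is a made-up example, not real training data.
from collections import Counter, defaultdict

training_text = (
    "the cat sat on the mat the cat ate the fish the dog sat on the rug"
)

# For each word, tally which words follow it in the corpus.
follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the statistically most frequent next word, or None."""
    if word not in follows:
        return None  # no correlations to draw on: the program has no idea
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" — the most frequent follower
print(predict_next("sat"))  # "on"
```

The program "guesses" well within its corpus, but it has no idea what a cat is, which is Doctorow's point: correlation stands in for comprehension without producing it.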
While I'd like to believe a sufficiently advanced supercomputer with more interconnections, "neurons," and assimilated data than any human brain could hold might awaken to self-awareness, like Mike in Heinlein's THE MOON IS A HARSH MISTRESS, I must admit Doctorow's argument is highly persuasive. Still, people do anthropomorphize their technology, even naming their Roomba vacuum cleaners. (I haven't done that. Our Roomba is a low-end, fairly dumb model. Its intelligence is limited to changing direction when it bumps into obstacles and returning to its charger when low on power, which I never let it run long enough to do. Nevertheless, I give the thing pointless verbal commands on occasion. It doesn't listen to me any less than the cats do, after all.) People carry on conversations with Alexa and Siri. I fondly remember a cartoon I saw somewhere of a driver simultaneously listening to the GPS apps on both the car's system and the cell phone, the two GPS voices arguing with each other about which route to take.
Remember Eliza, the computer therapist program? She was invented in the 1960s, and supposedly some users mistook her for a human psychologist. You can try her out here: Eliza
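Eliza's trick is simpler than her reputation suggests: she scans the input for keywords and emits a canned or fill-in-the-blank reply. The stripped-down sketch below imitates that mechanism; the keyword list and replies are invented for illustration, not Weizenbaum's actual script.

```python
# A minimal ELIZA-style responder: keyword matching plus canned templates.
# The rules below are made up to illustrate the technique.
import re

RULES = [
    # Any family word — including "relative" — triggers the family prompt.
    (r"\b(mother|father|family|sister|brother|relative)\b",
     "Tell me more about your family."),
    (r"\bI am (.+)", "How long have you been {0}?"),
    (r"\bI want (.+)", "Why do you want {0}?"),
]

DEFAULT = "Do you have any psychological problems?"

def eliza_reply(text):
    """Reply by matching the first rule that fires; no understanding involved."""
    for pattern, template in RULES:
        m = re.search(pattern, text, re.IGNORECASE)
        if m:
            return template.format(*m.groups())
    return DEFAULT

print(eliza_reply("Einstein says everything is relative"))
# → "Tell me more about your family." ("relative" fires the family rule)
print(eliza_reply("I am worried about the election"))
# → "How long have you been worried about the election?"
```

The "Einstein" line shows why such programs derail so easily: the word "relative" is just a string to match, not a concept.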
As the page mentions, the dialogue goes best if you limit your remarks to talking about yourself. When I tried to engage her in conversation about the presidential election, her lines quickly devolved into, "Do you have any psychological problems?" (Apparently commenting that one loathes a certain politician is a red flag.) So these AI therapists don't really pass the Turing test. I've read that if you say to one of them, for instance, "Einstein says everything is relative," it will probably respond, "Tell me more about your family." Many years ago, when our two youngest sons were preteens, we acquired a similar, very simple program. One communicated with it by typing, and it would type a reply that the computer's speaker also read aloud. The kids had endless fun writing sentences such as, "I want [long string of numbers] dollars," and listening to the computer voice retort with something like, "I am not here to fulfill your need for ten quintillion, four quadrillion, nine trillion, fifty billion, one hundred million, two thousand, one hundred and forty-one dollars."
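The number-reading trick the kids exploited is a classic little algorithm: split the digits into groups of three, name each group, and attach a scale word. Here's a hedged sketch of how such a program might do it, simplified to stop at billions rather than the quintillions our speech program handled.

```python
# Sketch of number-to-words conversion, the routine behind the talking
# computer's retorts. Simplified: handles 0 through the billions only.
ONES = ["", "one", "two", "three", "four", "five", "six", "seven",
        "eight", "nine", "ten", "eleven", "twelve", "thirteen", "fourteen",
        "fifteen", "sixteen", "seventeen", "eighteen", "nineteen"]
TENS = ["", "", "twenty", "thirty", "forty", "fifty", "sixty",
        "seventy", "eighty", "ninety"]
SCALES = ["", " thousand", " million", " billion"]

def under_thousand(n):
    """Spell out a number from 1 to 999."""
    parts = []
    if n >= 100:
        parts.append(ONES[n // 100] + " hundred")
        n %= 100
    if n >= 20:
        word = TENS[n // 10]
        if n % 10:
            word += "-" + ONES[n % 10]
        parts.append(word)
    elif n:
        parts.append(ONES[n])
    return " ".join(parts)

def number_to_words(n):
    """Spell out a non-negative integer, naming each three-digit group."""
    if n == 0:
        return "zero"
    groups, scale = [], 0
    while n:
        n, chunk = divmod(n, 1000)
        if chunk:
            groups.append(under_thousand(chunk) + SCALES[scale])
        scale += 1
    return ", ".join(reversed(groups))

print(number_to_words(4009050141))
# → "four billion, nine million, fifty thousand, one hundred forty-one"
```

Extending it to quintillions is just a matter of lengthening the SCALES list, which is presumably why the kids' long strings of digits posed the program no difficulty at all.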
Margaret L. Carter
Carter's Crypt