Cory Doctorow's latest LOCUS essay explains why he's an "AI skeptic": Full Employment
He believes it highly unlikely that we'll create "general AI" anytime in the near future, as opposed to the present-day specialized "machine learning" programs. What, no all-purpose companion robots? No friendly, sentient supercomputers such as Mike in Heinlein's THE MOON IS A HARSH MISTRESS and Minerva in his TIME ENOUGH FOR LOVE? Not even the brain of the starship Enterprise?
Doctorow also professes himself an "automation-employment-crisis skeptic." Even if we achieved a breakthrough in AI and robotics tomorrow, he declares, human labor would be needed for centuries to come. Each job rendered obsolete by automation would be replaced by multiple new jobs. He cites the demands of climate change as a major driver of employment creation. He doesn't, however, address the problem of retraining the millions of workers whose jobs are superseded by technological and industrial change.
The essay broadens its scope to wider economic issues, such as the nature of real wealth and the long-term unemployment crisis likely to result from the pandemic. Doctorow advances the provocative thesis, "Governments will get to choose between unemployment or government job creation." He concludes with a striking image:
"Keynes once proposed that we could jump-start an economy by paying half the unemployed people to dig holes and the other half to fill them in. No one’s really tried that experiment, but we did just spend 150 years subsidizing our ancestors to dig hydrocarbons out of the ground. Now we’ll spend 200-300 years subsidizing our descendants to put them back in there."
Speaking of skepticism, I have doubts about the premise that begins the article:
"I don’t see any path from continuous improvements to the (admittedly impressive) 'machine learning' field that leads to a general AI any more than I can see a path from continuous improvements in horse-breeding that leads to an internal combustion engine."
That analogy doesn't seem quite valid to me. An organic process (horse-breeding), of course, doesn't evolve naturally into a technological breakthrough. Development from one kind of inorganic intelligence to a higher level of similar, although more complex, intelligence is a different kind of process. Not that I know enough of the relevant science to argue for the possibility of general AI. But considering the present-day abilities of our car's GPS and the Roomba's tiny brain, both of them smarter than our first desktop computer of only about thirty years ago, who knows what wonders might unfold in the next fifty to a hundred years?
Margaret L. Carter
Carter's Crypt