Human brain cells are being used in experimental "biological computers":
Biological Computer Made of Human Brain Cells
A company in Melbourne called Cortical Labs has constructed a prototype of a biological AI composed of tiny human neurons (too small to see with the naked eye). Brain cells in a box are shown "responding to inputs from a nearby computer. Put simply, the neurons were learning."
This system learns faster than a non-biological AI. As a plus, it draws much less power. An early version, called DishBrain, learned to play Pong. Not on a pro level, its creator admits -- "it hit only slightly more balls than it missed." But it performed measurably better than an untrained system.
I'm reminded of a SESAME STREET skit in which Ernie watches Bert play checkers with his pet pigeon. Ernie's excited by the amazing accomplishment of a pigeon's learning to play checkers. She must be very smart. Bert counters that she isn't really that smart; in all the games they've played, she's beaten him only twice.
The chief science officer of Cortical Labs, Dr. Brett Kagan, remarks, "The only thing that has 'generalised intelligence' ... are biological brains." Therefore, his team doesn't aim to produce biological computers to "replace the things that the current AI methods do well." Rather, they hope to grow neuron networks able to "infer from very small amounts of data and then make complex decisions" in ways non-biological networks can't.
A related article expands on possible research uses for these brains-in-boxes. For one, the researchers plan to study the effects of ethanol on their learning abilities -- "to see how alcohol and medicines affect the cells, with Dr Kagan saying they will use ethanol to get them 'drunk' and see whether they play Pong more poorly."
Brain Cells in a Dish
The first article above compares these experiments to "brain organoids" -- lentil-size, 3D lab-grown "brains" -- being produced and studied at the University of Queensland. In both cases, scientists emphasize that brains-in-boxes (or dishes) don't have the complexity to possess awareness or emotions. But suppose that "eventually, larger networks could experience consciousness or an understanding of their condition," perhaps even on a human level? The Frankensteinian ethical implications would be mind-boggling. The morality of turning artificially grown brains into alcoholics, for instance. :)
Of course, present-day neurons on chips capable of simple learning and problem-solving have a long way to go before they might achieve self-awareness and free will. But in the future?
Margaret L. Carter
Please explore love among the monsters at Carter's Crypt.