Intelligence: It's Not All In Your Head
Quote of the Week:
AI theorists stopped treating the human body as an overwhelming problem to be set aside and started treating it as an irrelevant matter to be ignored. Today the mainstream argues that there is no meaningful difference between the human brain, with its networks of neurons and axons--electrical and chemical on-off switches--and computers powered by 1s and 0s. And by the same analogy, computer scientists understand the human mind to be the equivalent of software running on the brain-computer.
Whatever differences exist between humans and machines, today's gurus of artificial intelligence argue, will vanish in the not-too-distant future. Human minds, their memories and personalities, will be downloadable to computers. Human brains, meanwhile, will become almost infinitely upgradable, by installing faster hardware and the equivalent of better apps. The blending of human and machine, which Google's Ray Kurzweil calls the Singularity, may be less than 30 years off, they theorize.
David Gelernter isn't buying it. The question of the body must be faced, and understood, he maintains. "As it now exists, the field of AI doesn't have anything that speaks to emotions and the physical body, so they just refuse to talk about it," he says. "But the question is so obvious, a child can understand it. I can run an app on any device, but can I run someone else's mind on your brain? Obviously not."
In Gelernter's opinion, we already have a most singular form of intelligence available for study--the one that produced Bach and Shakespeare, Jane Austen and Gandhi--and we scarcely understand its workings. We're blundering ahead in ignorance when we talk about replacing it.
As we become increasingly technologically savvy, we seem to be more and more fascinated by what happens in people from the neck up.
Artificial intelligence is, after all, engaged in a headlong chase (sorry!) to replicate human cognitive operations in computer hardware-software systems. Computer scientists and neuroscientists eye one another's work for clues about ways to create machines that mimic the brain's operations. They create machine-learning, reinforcement-learning, and every other flavor of learning algorithm to best humans at chess, Go, and poker, and to outperform them at box stacking, welding, and burger flipping.
Just don't ask them to open a door they've never seen before. Or to make a hot dog instead of a burger. Or to play Parcheesi for a change. Oh, they might be able to do so after tweaking the algorithm and creating a practice data set. But not nearly as quickly as your 10-year-old.
That's because the level of “intelligence” that came with your 10-year-old's body at birth far outstrips anything that we've seen in even the most sophisticated robots. No tabula rasa there. Understanding the intelligence-enabling equipment that comes standard with each human infant means examining one of the most vexing philosophical puzzles that humans have grappled with for thousands of years: the mind-body problem.
If you ask the average Jane or Joe where her/his mind is, chances are you'll get an answer that looks something like the image posted above: “it's in my head, of course.” We know our bodies are necessary for us to have a mind (at least until the great Singularity). But Descartes' cogito ergo sum demoted the body to the role of bit player in determining what it means to be human. The body is a machine for Descartes and his like-minded descendants, not unlike the mechanical automata he saw in exhibitions all over Paris in the early 17th century.
Yes, that's right: robots inspired Descartes' thinking about the mind-body relationship.
And like some bizarre version of Frankenstein, we've been trying to bring the body back to life ever since.
Of course, not everyone agreed with Descartes. A school of philosophy, and later psychology, argued that the conscious mind emerges from the complex human body-brain system. For these theorists, humanity is defined by the lived body, not by a disembodied cogito, a consciousness performing complex brain computations.
We know that the whole body is involved in our most fundamental responses: emotions. Darwin demonstrated the links between our emotions and those of our evolutionary ancestors in some of his later work. We now define an emotion as a reaction composed of integrated cognitive, affective, and physiological elements. Anyone who's ever experienced fear knows the conscious experience is enveloped in the heart-pounding, palm-sweating, eyes-wide-open alert system that the body instantly becomes.
Fear isn't just an idea; it's a bodily experience. And what happens to your mind at that moment? Increased environmental awareness, fight-or-flight planning, contingency assessment: all these “intelligent” contents flood your consciousness, unbidden, triggered by the intertwined operations and secretions of a dozen glands and other perceiving, processing, and actuating organs responding to a holistic threat.
That's the lived experience of fear: a system-wide mind-body reaction to perceived danger.
To be clear: artificial intelligence systems are nowhere near having the capability to recreate such experiences in robots. Until they do, a self-driving vehicle, for example, will never truly appreciate the experience of danger we feel when skidding on a wet highway. And, while that experience may not be necessary to pull the car out of the skid, it would be very valuable in understanding the reactions of passengers in a vehicle that's just experienced a “close call.” Indeed, the very experience of a close call would be difficult to get across to a robot.
When you think about the complex array of harmoniously tuned systems that keep human consciousness running, the idea that we will be able to extract a “mind” from a body and upload it into a mechanical robot any time soon seems very far-fetched.
But, if we are ever to do so, we'll first have to learn to at least simulate the most complex environment we've yet encountered: the integrated whole that is the lived human body-mind unity.
Tom Guarriello, Ph.D.
Thanks for subscribing to the RoboPsych Newsletter
Humans: A Sci-Fi Vision Of 2030
Quote of the Week:
Interrogator: What do you feel about being kept here?
Niska: I volunteered to be here. You promised that if I could prove my consciousness, I would be given the chance of a fair trial.
Laura: What would you do with your freedom, Niska?
Niska: I don't know. Isn't that the point of freedom? You can do anything with it. Or nothing.
Interrogator: What do you feel about us humans?
Niska: You can be loving, you can be kind, you can be cruel. You're always trying to kill each other.
Interrogator: Why do you think that is?
Niska: Because there are too many of you, and your lives are very short. You all have to die. You're here one minute, gone the next. If that wasn't the case, maybe you'd be nicer to each other. Maybe you'd be nicer to us.
What's It Like When Ubiquitous Robots Become Conscious?
Science fiction has helped us imagine the future for centuries. That's its #1 job.
So, when AMC launched its futuristic series Humans last year, the premise was clear: this is what our world will look like in the near future, when we are surrounded by human-looking, AI-equipped robots, here called "synthetics" (or "synths," for short). And then it asks: what would happen if a small group of those synths became conscious, feeling creatures?
Major Humans spoilers ahead!
Now, halfway through Season 2, the show's writers and producers are presenting some knotty issues for us to consider.
Take Niska, pictured above, for example. Two sentences from her bio on the AMC website sum up a large part of her backstory:
Volatile Niska had come to distrust and hate humans. This culminated in her murder of an abusive client at the Synth brothel where she was forced to work.
That act led to the scene quoted above, in which Niska is fighting for the right to be tried for murder as a human. To earn this right, she has to pass a sophisticated version of the Turing Test to demonstrate not just cognitive abilities but consciousness and emotions as well.
Lucky Niska. If she “passes” the test, she gets to be tried for murder as a human. If she “fails” she gets destroyed as a malfunctioning machine.
A little backstory. Niska is one of a small band of synths made conscious by a group of singularity-minded scientists/coders. When the scientists' “rogue code” is uploaded into one of the seemingly millions of “standard synth units” in the Humans universe, that unit emits a reboot tone and opens its eyes wide, as if awakening from a dream. These scenes very effectively signal that this unit has now passed over to a new kind of existence.
Halfway through Season 2, more and more synths are awakening daily.
The show explores the implications of a world rich in AI robots living widely varying lives. Some are simply worker bees: sweeping up, carrying barrels of chemicals, handing out leaflets on the street. As her bio reveals, Niska was assigned as a sex worker in a synth brothel, where she was subjected to abuse and cruelty that wouldn't surprise anyone familiar with that industry. One client tried to force her to act like a child so that he could experience raping her that way; Niska objected, crushing his windpipe and killing him instantly.
Her campaign to be tried as a human is initially met with scoffing incredulity by the British court system (the series originated in England and is primarily set there). However, Niska's lawyer, Laura (who knows Niska very well from Season 1) pushes for a Turing-like determination of her mental status so that she might be given a full human trial weighing the justifiability of the murder.
This possibility is not as far-fetched as it may seem. Just as municipalities, universities, or corporations have been granted “legal personhood,” the EU last year proposed that if a machine can be shown to think and feel, it should be afforded a special status: “electronic personhood.” In the opinion paper, the EU recommended:
creating a specific legal status for robots, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons with specific rights and obligations, including that of making good any damage they may cause, and applying electronic personality to cases where robots make smart autonomous decisions or otherwise interact with third parties independently;
Under these provisions, corporations wouldn't be responsible for damages caused by robots they'd manufactured; the robots themselves would. That means robot insurance would become a big thing, fast.
It also means that fictional scenarios like Niska's would not be so bizarre. Justice systems worldwide would be faced with unprecedented cases to untangle.
That's just one of the many issues Humans raises. One character is actively working on “consciousness transference,” a Kurzweilian way to upload an individual's entire mental life into a machine, or another body. Another learns that he's been “made redundant” in his job through a series of decisions made entirely by synths, without any humans in the loop. Still another, a synth, finds that she's fallen in love with a human and decides to try to find happiness with him.
While it's easy to scoff at these plot lines, it's also fruitful to explore what they reveal about possibilities that are quickly coming at us from just over the horizon. As the EU document shows, we're closer to having to make many of these decisions than we may think. And, the unique psychological challenges presented by living with an entirely new species would be staggering.
That's why I've been doing an episode-by-episode commentary on Season 2 of Humans with fellow psych-tech geek Josué Cardona on the RoboPsych Podcast. If you're interested, you can find our conversations here. Let us know what you think as you, too, consider what happens when thinking, feeling, behaving robots challenge every notion of ourselves that we've developed over hundreds of thousands of years of human evolution.
Tom Guarriello, Ph.D.
Thanks for subscribing to the RoboPsych Newsletter