Don't Get Stuck Using Extinct Thinking

Daniel Kahneman, Nobel Prize Winner, Very Wise Man

That's Nobel Prize winner Daniel Kahneman up there along with the cover of his highly successful book, Thinking, Fast and Slow. One of the book's many insights is Kahneman's description of the two "systems," or ways of thinking, referenced in the title.

Fast thinking ("System One") is largely unconscious and responsible for us safely navigating the world relatively effortlessly. It's also prone to a set of errors, biases arising from its use of rules of thumb, heuristic shortcuts that enable us to make quick decisions.

Slow thinking ("System Two") is a deliberate approach to solving problems, utilizing logic and other step-by-step procedures.

System One immediately knows that 2 + 2 equals 4 but has no idea what 24 x 17 is. That's a job for System Two.
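Just to make the contrast concrete, here's roughly the kind of step-by-step grind System Two has to do for that second one:

24 × 17 = (24 × 10) + (24 × 7) = 240 + 168 = 408

Slow, deliberate, one step at a time.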

While System One contains a host of useful information and algorithms ("always pay attention to the source of the siren"), it's also home to plenty of questionable rules that can lead us astray in the modern world.

Take robots, for example. 

A lot of our ideas about robots come from fictional portrayals of out-of-control creatures. Ever since the villagers broke out the torches and pitchforks to hunt down Dr. Frankenstein's creation, we've been storing up images of doom machines. (Machines themselves have been scaring us since the Industrial Revolution, but robots add an extra layer of human resemblance that freaks us out even more.)

It's impossible to read a day's worth of robotics news without coming across the word "apocalypse" at least twice.

The ideas we have about robots are our System One at work, helping to keep the world a simple, survivable place by sorting out the scary from the secure. 

Because of these residual images, robots definitely scare System One.

"System 1 has been shaped by evolution to provide a continuous assessment of the main problems that an organism must solve to survive: How are things going? Is there a threat or a major opportunity? Should I approach or avoid? … situations are constantly evaluated as good or bad, requiring escape or permitting approach" (Kahneman, p. 89).

But in a world where we're about to be side by side with robots doing everything from working with us in a factory to caring for a loved one in a nursing home, we're going to have to get over those fears.

We're going to have to change our RoboPsych so that we're not unduly alarmed whenever we come across a robot.

Think about your first driving lesson or first airplane flight. Chances are your System One alarm bells were pinging pretty steadily. 

But you learned that you could control the car with those pedals and that wheel, and that the folks up front had the plane pretty much figured out.

That's the way it's going to be with robots, too.

You'll find yourself becoming increasingly at ease in their presence and maybe even developing a relationship with them, like the one you have with your car.

The thing is, though, we continue to write lots of robot fear stories. Why? Because the possibilities, however remote, are sooooo scary. I mean, the robots could decide they don't want to obey us anymore, right? And they might become impossible to identify, and build bombs, and... all of which is about as likely as every airliner forgetting how to fly and plummeting to earth on the same day.

As we begin encountering more and more robots in our everyday lives (the next decade is going to be eye-opening on this count), we'll have to come to terms with our current RoboPsych, our soon-to-be-outdated way of thinking about and emotionally reacting to robots.

Like our ancestors, who didn't grow up with cars and jet planes, we're about to become the first generation to cohabit the world with amazing new intelligent machines.

And System One's going to get quite a workout as we make the transition from our current world to that one.