Issue #78: RoboHysteria


The RoboPsych Newsletter

Exploring The Psychology of 
Human-Robot Interaction
Issue 78: October 3, 2017

Quote of the Week:


“We are surrounded by hysteria about the future of Artificial Intelligence and Robotics. There is hysteria about how powerful they will become how quickly, and there is hysteria about what they will do to jobs.


As I write these words on September 2nd, 2017, I note just two news stories from the last 48 hours.


Yesterday, in the New York Times, Oren Etzioni, chief executive of the Allen Institute for Artificial Intelligence, wrote an opinion piece titled How to Regulate Artificial Intelligence where he does a good job of arguing against the hysteria that Artificial Intelligence is an existential threat to humanity. He proposes rather sensible ways of thinking about regulations for Artificial Intelligence deployment, rather than the chicken little “the sky is falling” calls for regulation of research and knowledge that we have seen from people who really, really, should know a little better.


Today, there is a story in Market Watch that robots will take half of today’s jobs in 10 to 20 years. It even has a graphic to prove the numbers.


The claims are ludicrous. [I try to maintain professional language, but sometimes…]

For instance, it appears to say that we will go from 1 million grounds and maintenance workers in the US to only 50,000 in 10 to 20 years, because robots will take over those jobs. How many robots are currently operational in those jobs? ZERO. How many realistic demonstrations have there been of robots working in this arena? ZERO. Similar stories apply to all the other job categories in this diagram where it is suggested that there will be massive disruptions of 90%, and even as much as 97%, in jobs that currently require physical presence at some particular job site. Mistaken predictions lead to fear of things that are not going to happen. Why are people making mistakes in predictions about Artificial Intelligence and robotics, so that Oren Etzioni, I, and others, need to spend time pushing back on them?


Rodney Brooks,
The Seven Deadly Sins of Predicting the Future of AI”

Are We Done Freaking Out Yet?  

Rodney Brooks is not a happy roboticist!

This issue's Quote of the Week pretty well sums up Brooks' thoughts and feelings about the collective hysteria we've created about how robots and AI will impact employment, and maybe even conspire to wipe us out.

No more jobs! Existential threat!! 

Brooks had to resort to the (unprofessional!) word “ludicrous” to grab our attention.

Of course, Brooks is not only one of robotics' academic pioneers at MIT but also a co-founder of iRobot and the founder of Rethink Robotics. iRobot originated the Roomba, and Rethink created Baxter and Sawyer, two of the most widely deployed collaborative industrial robots now in use.

If anybody should know what robots are capable of, it's Brooks. And he thinks we're freaking out because we misunderstand some fundamental principles that govern what AI-powered robots can actually do. 

Brooks' seven deadly sins of predicting the future of AI are as follows:
  1. Over and Under Estimating - This “sin” reflects Brooks' support of futurist Roy Amara's now famous quote: “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” Brooks cites GPS as a perfect example of this principle and sees AI doom predictions as another.
  2. Imagining Magic - Arthur C. Clarke's insight that “any sufficiently advanced technology is indistinguishable from magic” grounds Brooks' thinking. He says, “when a technology passes that magic line anything one says about it is no longer falsifiable, because it is magic. This is a problem I regularly encounter when trying to debate with people about whether we should fear just plain AGI, let alone cases C or D from above [more extreme superintelligence scenarios he describes earlier in his essay]. I am told that I do not understand how powerful it will be. That is not an argument. We have no idea whether it can even exist.” This is like trying to argue with someone who believes in werewolves.
  3. Performance vs. Competence - We humans generalize abilities. If someone is smart in one area, say medicine, we tend to assume she will be smart in others, like investing. Because we now regularly interact with AI systems that can perform a difficult task, like identifying a Frisbee in a photograph, we are prone to believing that such a system can also do related things, like tell us how to throw a Frisbee. Not true. Generalizing from performance on one task to broad competence is a big mistake.
  4. Suitcase Words - When we hear words like “learning,” “intelligence,” or “consciousness” applied to AI, we instinctively ascribe human-level meaning to the terms. But experts will tell us (if they're not too strongly influenced by the opportunity for a sensational headline or article) that machines don't learn, exhibit intelligence, or experience consciousness in anything like the way humans do. So, projecting our own versions of these capabilities onto machines is both natural and deeply mistaken.
  5. Exponentials - We've all seen the Moore's Law charts that plot the historical growth in transistor density. Thanks to charts like those, we've become very comfortable speaking in exponential terms. Brooks thinks we should beware. Many “exponential” phenomena are actually S-curves that plateau and may, or may not, continue to grow (see the short sketch after this list). He says: “so when you see exponential arguments as justification for what will happen with AI remember that not all so called exponentials are really exponentials in the first place, and those that are can collapse suddenly when a physical limit is hit, or there is no more economic impact to continue them.” Much of AI hysteria is grounded in belief in exponential growth in both hardware and software.
  6. Hollywood Scenarios - Face it, stories like Frankenstein, Terminator, and Blade Runner deeply affect our thinking about AI and robots. Mostly, this effect is unconscious. Say “robot” and lots of people think “killer....” These scenarios have as much connection with the realities of AI's future as The Martian does with interplanetary travel.
  7. Speed of Deployment - Software updates happen relatively quickly. Hardware, not so much. And the kinds of AI scenarios we see in many imagined situations demand lots of new hardware. As Brooks says, “A lot of AI researchers and pundits imagine that the world is already digital, and that simply introducing new AI systems will immediately trickle down to operational changes in the field, in the supply chain, on the factory floor, in the design of products. Nothing could be further from the truth. The impedance to reconfiguration in automation is shockingly mind-blowingly impervious to flexibility.”
 Brooks believes that every doomsday scenario we see today is founded in these “sins.” He says: “In my experience one can always find two or three or four of these problems with their arguments.”
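
To see Brooks' exponentials point in action, here is a minimal sketch, in Python with made-up numbers (the growth rate and ceiling are illustrative assumptions, not figures from Brooks' essay), comparing a true exponential with a logistic S-curve that starts out growing at the same rate:

    import math

    def exponential(t, rate=0.5):
        # Unbounded exponential growth: multiplies by e every 1/rate steps.
        return math.exp(rate * t)

    def logistic(t, rate=0.5, ceiling=100.0):
        # S-curve with the same early growth rate but a hard ceiling
        # (the physical or economic limit Brooks warns about).
        return ceiling / (1.0 + (ceiling - 1.0) * math.exp(-rate * t))

    # Early on the two curves are nearly identical; later, only one keeps climbing.
    for t in range(0, 21, 4):
        print(f"t={t:2d}  exponential={exponential(t):10.1f}  s-curve={logistic(t):6.1f}")

For the first several steps the two columns track each other closely, which is exactly why an S-curve in its early phase is so easy to mistake for an exponential; by t=20 the S-curve has flattened near its ceiling of 100 while the exponential has shot past 22,000.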

So, the next time you see a shocking prediction of millions of jobs being eliminated in the next five years, or robot overlords treating humans like pets, take a step back and see if the claim can pass Brooks' “sins” test. There's a good chance we're seeing still another example of overheated hype and not the end of the world as we know it.
Tom Guarriello, Ph.D. 

Thanks for subscribing to the RoboPsych Newsletter

If you're not subscribed, please sign up here.  
Be sure to subscribe to the RoboPsych Podcast!