Issue #73: The Simple Things In Life


The RoboPsych Newsletter

Exploring The Psychology of 
Human-Robot Interaction
Issue 73: July 24, 2017


Simple, But Not Easy
Quote of the Week:

 

Robots can do all kinds of things that humans just can’t.

They can sit in the same place for years and weld car parts together with a level of precision that no human can faithfully reproduce. Some smart machines can speak dozens of languages. Others can fly hundreds of feet in the air, surveilling life down below.

But sometimes the biggest challenges for robots are things that we humans take for granted. Like, say, not falling into decorative fountains.

On Monday, a robot security guard from the California-based startup Knightscope fell in a fountain outside of a Washington, D.C., office building. The wheeled robot was making its rounds when it encountered a common feature of manmade environments it’s not built to handle: a fountain with stairs.

...

Presumably, the robot is equipped with computer vision software that should be able to detect obstacles, like bodies of water, or cars. But clearly, its smart eyes didn’t quite work in this instance, demonstrating how difficult it is for robots to navigate a world built for humans.


“A Robot Security Guard Rolled Its Way Into A Fountain”
April Glaser, Slate


 
It's The Simple Things In Life   


How old were you when you learned not to try to walk on water?

Crazy question, right? Who doesn't know that you can't do that?  

But, apparently, this Knightscope security robot at a D.C. office building tried to do just that last week with the result that 99.999% of us would have predicted. It fell in a fountain. 

It's the kind of incident that should remind us of a very important fact about even the “smartest” AI-equipped robots: it's the simple things in life that they're clueless about.

The robot's maker, Knightscope, claims that the system is equipped with “advanced anomaly detection,” “autonomous presence,” “live 360 degree video streaming,” and, coming soon (!), “gun detection.”

Cool! 

And yet, apparently, if the sun reflects off the surface of a fountain in just the right way, it will merrily roll right into the drink. Kind of like that Tesla did in Florida.

Twitter wags were quick to proclaim the incident a “suicide” and posted photos of the memorial that people created at the bot's charging station. 

 
A couple of our most fundamental cognitive tendencies are on display in this incident.

First off, we humans “know” a lot more about the world than we know we know. I mean, we know we can't walk on water, but if somebody asked you to make a list of all the things you know, chances are pretty good that you wouldn't put “no water walking” on the list.

It's just “common sense.”

“No water walking” is part of what phenomenological philosopher Alfred Schutz called our “stock of knowledge.” This is the vast collection of tacit, unconscious facts about the world that makes up the infrastructure of our everyday lives. Without this lived knowledge, every moment would be filled with an exhaustingly complex set of conscious decisions. You know the kind of thing: let go of an object, it falls. Wave a pedestrian across a street, stop driving until she crosses. Answer a question with a shrug, ask me something else.

Robots, on the other hand, only know what we tell them (and, recently, what we “train” them to know through structured learning models). But there's no way we can train them to know everything we know.

Hell, we don't even know everything we know, so how could we? 


This means that, for the foreseeable future, robots will be teaching us things we “know” by doing things like rolling into fountains. It's like realizing, as your child steps off a London curb without first looking to the right, that she doesn't know people in England drive on the other side of the road. Robots will be surprising us with their “stupidity” for years to come. 

Then there's the “memorial.”

I know, I know...people were just “kidding around” when they placed their cards, flowers, and photos. And yet, Oliver Griswold's tweet captured something important about our cognitive operations when he observed: “The future is weird.”

The sight of this barely humanoid (more like R2D2oid) robot lying in the fountain almost instantly evoked a slew of remarkably consistent anthropomorphized reactions: the robot was “bored,” or “despondent,” “hated its job,” or “overheard some horrible news” and decided to “end it all.”

Moments like this give us a glimpse into just how different many parts of the future will be from the present...how strange it will be to interact with systems that evoke odd emotional reactions while highlighting aspects of our world that we take for granted.

Robots will make us feel both inferior and superior to them at the same time. 


It will be the simplest everyday things, the ones they can't master and we wryly mock, that allow us to maintain our comfort in our species' exceptionalism as AI comes to outperform us in one domain after another.

But, sooner or later, that means we'll have to figure out what's so special about being human and resign ourselves to the odd realities of our very, very weird future. 

 

Tom Guarriello, Ph.D. 

Thanks for subscribing to the RoboPsych Newsletter

If you're not subscribed, please sign up here.  
 
Be sure to subscribe to the RoboPsych Podcast!
Our mailing address is:
tom@RoboPsych.com






