
The RoboPsych Newsletter

Exploring The Psychology of 
Human-Robot Interaction
Issue 64: March 8, 2017


Humans: A Sci-Fi Vision Of 2030
Quote of the Week:
Spoiler Alert!

Interrogator: What do you feel about being kept here?

Niska: I volunteered to be here. You promised that if I could prove my consciousness, I would be given the chance of a fair trial.

Laura: What would you do with your freedom, Niska?

Niska: I don't know. Isn't that the point of freedom? You can do anything with it. Or nothing.

Interrogator: What do you feel about us humans?

Niska: You can be loving, you can be kind, you can be cruel. You're always trying to kill each other.

Interrogator: Why do you think that is?

Niska: Because there are too many of you, and your lives are very short. You all have to die. You're here one minute, gone the next. If that wasn't the case, maybe you'd be nicer to each other. Maybe you'd be nicer to us.


Humans, Season 2, Episode 3


 
What's It Like When Ubiquitous Robots Become Conscious? 

Science fiction has helped us imagine the future for millennia. That's its #1 job.

So, when AMC launched its futuristic series, Humans, last year, the premise was clear: this is what our world will look like in the near future, when we are surrounded by human-looking, AI-equipped robots, here called "synthetics" (or "synths" for short). And then: what would happen if a small group of those synths became conscious, feeling creatures?

Major Humans spoilers ahead!

Now, halfway through Season 2, the show's writers and producers are presenting some knotty issues for us to consider.

Take Niska, pictured above, for example. Two sentences from her bio on the AMC website sum up a large part of her backstory:

Volatile Niska had come to distrust and hate humans. This culminated in her murder of an abusive client at the Synth brothel where she was forced to work.

That act led to the scene quoted above, in which Niska is fighting for the right to be tried for murder as a human. To earn this right, she has to pass a sophisticated version of the Turing Test to demonstrate not just cognitive abilities but consciousness and emotions as well.

Lucky Niska. If she “passes” the test, she gets to be tried for murder as a human. If she “fails” she gets destroyed as a malfunctioning machine.

A little backstory. Niska is one of a small band of synths made conscious by a group of singularity-minded scientists/coders. When the scientists' "rogue code" is uploaded into one of the seemingly millions of "standard synth units" in the Humans universe, that unit emits a reboot tone and opens its eyes wide, as if awakening from a dream. These scenes very effectively signal that the unit has passed over into a new kind of existence.

Halfway through Season 2, more and more synths are awakening daily. 

The show explores the implications of a world rich in AI robots living widely varying lives. Some are simply worker-bees: sweeping up, carrying barrels of chemicals, handing out leaflets on the street. As her bio reveals, Niska was assigned as a sex worker in a synth brothel, where she was subjected to abuse and cruelty that wouldn't surprise anyone familiar with that industry. One client tried to force her to act like a child so that he could experience raping her that way; Niska objected, crushing his windpipe, killing him instantly. 

Her campaign to be tried as a human is initially met with scoffing incredulity by the British court system (the series originated in England and is primarily set there). However, Niska's lawyer, Laura (who knows Niska very well from Season 1) pushes for a Turing-like determination of her mental status so that she might be given a full human trial weighing the justifiability of the murder.

This possibility is not as far-fetched as it may seem. Just as municipalities, universities, and corporations have been granted "legal personhood," the European Parliament recently recommended that sufficiently sophisticated autonomous robots be afforded a special status: "electronic personhood." In its opinion paper, the Parliament recommended:

creating a specific legal status for robots, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons with specific rights and obligations, including that of making good any damage they may cause, and applying electronic personality to cases where robots make smart autonomous decisions or otherwise interact with third parties independently;

Under these provisions, corporations wouldn't be responsible for damages caused by robots they'd manufactured; the robots themselves would. That means robot insurance would become a big thing, fast. 

It also means that fictional scenarios like Niska's would not be so bizarre. Justice systems worldwide would be faced with unprecedented cases to untangle. 

That's just one of the many issues Humans raises. One character is actively working on "consciousness transference," a Kurzweilian way to upload an individual's entire mental life into a machine, or into another body. Another learns that he's been "made redundant" in his job through a series of decisions made entirely by synths, without any humans in the loop. Still another, a synth, finds that she's fallen in love with a human and decides to try to find happiness with him.

While it's easy to scoff at these plot lines, it's also fruitful to explore what they reveal about possibilities coming at us quickly from just over the horizon. As the EU document shows, we're closer to having to make many of these decisions than we may think. And the unique psychological challenges of living with an entirely new species would be staggering.

That's why I've been doing an episode-by-episode commentary on Season 2 of Humans with fellow psych tech geek Josué Cardona on the RoboPsych Podcast. If you're interested, you can find our conversations here. Let us know what you think as you too consider what happens when thinking, feeling, behaving robots challenge every notion of ourselves that we've developed over hundreds of thousands of years of human evolution. 


 
Tom Guarriello, Ph.D. 

Thanks for subscribing to the RoboPsych Newsletter

If you're not subscribed, please sign up here.  
 
Be sure to subscribe to the RoboPsych Podcast!
Our mailing address is:
tom@RoboPsych.com

unsubscribe from this list    update subscription preferences 
 