Issue #75: Robots Don't Kill People; People With Robots Kill People


The RoboPsych Newsletter

Exploring The Psychology of 
Human-Robot Interaction
Issue 75: August 21, 2017


Keeping Humans in the Killing Loop
Quote of the Week:


“As companies building the technologies in Artificial Intelligence and Robotics that may be repurposed to develop autonomous weapons, we feel especially responsible in raising this alarm.

We warmly welcome the decision of the UN’s Conference of the Convention on Certain Conventional Weapons (CCW) to establish a Group of Governmental Experts (GGE) on Lethal Autonomous Weapon Systems. Many of our researchers and engineers are eager to offer technical advice to your deliberations.

We commend the appointment of Ambassador Amandeep Singh Gill of India as chair of the GGE. We entreat the High Contracting Parties participating in the GGE to work hard at finding means to prevent an arms race in these weapons, to protect civilians from their misuse, and to avoid the destabilizing effects of these technologies.

We regret that the GGE’s first meeting, which was due to start today, has been cancelled due to a small number of states failing to pay their financial contributions to the UN. We urge the High Contracting Parties therefore to double their efforts at the first meeting of the GGE now planned for November.

Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.

We therefore implore the High Contracting Parties to find a way to protect us all from these dangers.”

 

An Open Letter to the United Nations
Convention on Certain Conventional Weapons


 
Robots Don't Kill People, People With Robots Kill People 

Killer robots.

The two words go together so naturally that it's easy to forget they only go together in science fiction. But Elon Musk and 115 other luminaries today released a letter urging the UN to establish a special group to “protect us from the dangers” of the “lethal autonomous weapon systems” (LAWS) that threaten to become the “third revolution” (after gunpowder and nuclear weapons) in warfare.

The image of that wicked-looking little tank pictured above zipping around the desert, the mountains, or a city, scanning for hostiles and killing them on sight, has haunted our fiction-fueled dreams for about a hundred years.

So, it makes sense to ban “killer robots,” right? 

Sure...but...

Earlier this week, I was fortunate enough to spend a couple of days with a group of people who are dedicated to a cause few of us have spent much time thinking about...until the last few weeks, that is. These are folks who spend their lives working on nuclear arms control; some on complete nuclear weapon elimination. Kim Jong-un or not, these people work daily to prevent the horror of nuclear weapon use.

Do you know how many nuclear weapons the nine nuclear powers in the world possess right now?

(Bonus: can you name the nine countries?)

Go ahead...think of a number...I'll wait.

Got it?

~15,000

How close was your estimate? Most of us think the number is much smaller. 

The folks who are calling for banning killer robots...are they equally troubled by that number? Somehow, the massive arsenal held by (spoiler alert!) the United States, United Kingdom, Russia, France, China, Pakistan, India, North Korea, and Israel exists within a belief system of “mutually assured destruction” (MAD). No nation, the thinking goes, will use nuclear weapons against another nation that possesses them, because the potential for a devastating counter-strike is too high.

Now, saying that LAWS would be a “revolution” on a par with nuclear weapons sounds logical. Until you think about it for a minute. Yes, LAWS firing high-caliber ammunition or dropping precision-guided missiles from drones are horrifying, but their destructive power is puny compared with that of even one of those 15,000 nuclear warheads. And LAWS whose “kill decisions” are controlled remotely by a human operator sound more reassuring than ones with no human in the loop, until we remember there is always a human in the loop!

Nonsensical fantasies aside, LAWS could only be put into battle situations by people...they will not decide on their own to travel to a foreign land, select targets, and begin eliminating them. Every step of the way, human operators will make decisions, right up to the moment that the robot executes a kill algorithm written by human programmers.

Lately, we've spent lots of time thinking about ethical responsibilities in far-fetched autonomous vehicle scenarios, ultimately deciding that the vehicle itself cannot be held accountable for any injuries or damages it causes. Only people can be accountable.

Same for LAWS. There's no reason to believe that the Uniform Code of Military Justice won't hold the commander of a LAWS unit that destroys a non-combatant village just as accountable as it would the commander of an Army company. Weapon systems do not obviate command responsibility.  

That isn't to say that we shouldn't strive to put LAWS in the same category as chemical, biological, and nuclear weapons. 

But while we're wringing our hands over images of a battalion of rogue Terminators, let's not forget that we are faced daily with a world whose safety rests on a foundation of pure MADness. The sooner we eliminate the very real threat of those weapons...the weapons that really do pose an existential threat to our species and vast swaths of life on the planet...the sooner we can turn our attention to other categories of destructiveness that may be more titillating at the moment, but that are, in the end, just the latest extension of our readiness to kill one another with our newest technological innovation.

Clever sleights of Pandora's hand aside, lethal weapons will never be autonomous; they will always have their creators’ fingerprints all over them. Let's not make believe it could ever be otherwise.

 
Tom Guarriello, Ph.D. 

Thanks for subscribing to the RoboPsych Newsletter

If you're not subscribed, please sign up here.  
 
Be sure to subscribe to the RoboPsych Podcast!
Our mailing address is:
tom@RoboPsych.com
