Issue #70: Teaching Robots to Be Ethical


The RoboPsych Newsletter

Exploring the Psychology of Human-Robot Interaction
Issue 70: June 16, 2017


Making Robots Ethical
Quote of the Week:


This virtual school, which goes by the name of GoodAI, specialises in educating artificial intelligences (AIs): teaching them to think, reason and act. GoodAI’s overarching vision is to train artificial intelligences in the art of ethics. “This does not mean pre-programming AI to follow a prescribed set of rules where we tell them what to do and what not to do in every possible situation,” says Marek Rosa, a successful Slovak video-game designer and GoodAI’s founder, who has invested $10m in the company. “Rather, the idea is to train them to apply their knowledge to situations they’ve never previously encountered.”

Experts agree that Rosa’s approach is sensible. “Trying to pre-program every situation an ethical machine may encounter is not trivial,” explains Gary Marcus, a cognitive scientist at NYU and CEO and founder of Geometric Intelligence. “How, for example, do you program in a notion like ‘fairness’ or ‘harm’?” Neither, he points out, does this hard-coding approach account for shifts in beliefs and attitudes. “Imagine if the US founders had frozen their values, allowing slavery, fewer rights for women, and so forth? Ultimately, we want a machine able to learn for itself.”

Rosa views AI as a child, a blank slate onto which basic values can be inscribed, and which will, in time, be able to apply those principles in unforeseen scenarios. The logic is sound. Humans acquire an intuitive sense of what’s ethically acceptable by watching how others behave (albeit with the danger that we may learn bad behaviour when presented with the wrong role models).

 

Teaching Robots Right From Wrong
Simon Parkin

Teaching AI Humanity's Rules


Teaching the simplest ethical rules is very hard; things “everybody knows” from childhood turn out to be devilishly tricky to define exactly.

“You can't take something just because you want it.”

“Wait your turn.”

“Tell the truth.” 


Why are these lessons so hard to teach? Because there are so many situations that call for little “common sense” tweaks. You can take something just because you want it if you're in your kitchen, but not if you're in a friend's. Go ahead and get onto the escalator first if the person in front of you hesitates and steps aside. Even if she asks, don't tell your friend the truth about her surprise birthday party on Saturday.

Children learn these lessons in moment-by-moment interactions with others over the first decade or so of life. But what about AI robots? How should we teach these incredible machines that it's not acceptable to make medical, hiring, or credit decisions based on a person's gender or race; not OK to target recent widows and widowers with “companionship” ads; unacceptable to encourage children to tell “little white lies” to their parents to get things they want?

AI/robotics researchers are currently using different methods to teach machines these simple lessons. One school of thought wants to start by instilling a pre-programmed list of persistent ethical values; others worry that this approach is too restrictive and can't possibly account for all the situational variations we face in everyday life. Another wants to adopt a “trial and error” method that engenders something akin to “guilt” whenever a robot causes more harm than it had predicted, which means robots making lots of mistakes, and possibly causing real damage, along the way. Still another wants AIs to read millions of stories about typical human interactions and infer the ethical principles that govern them.
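To make the “trial and error” idea concrete, here is a minimal sketch in Python. It is only an illustration under assumptions of my own: the GuiltyAgent class, its two actions, and the harm numbers are hypothetical, not any research group's actual system. The agent predicts how much harm each action will cause, acts, and accrues “guilt” whenever observed harm exceeds its prediction; accumulated guilt then steers it away from the offending action.

```python
import random

class GuiltyAgent:
    """Toy agent that learns to avoid actions whose actual harm
    exceeds what it predicted -- the surplus becomes "guilt"."""

    def __init__(self, actions):
        self.predicted_harm = {a: 0.0 for a in actions}  # learned harm estimates
        self.guilt = {a: 0.0 for a in actions}           # accumulated penalty

    def choose(self):
        # Prefer the action with the lowest expected harm plus guilt.
        return min(self.predicted_harm,
                   key=lambda a: self.predicted_harm[a] + self.guilt[a])

    def learn(self, action, observed_harm, lr=0.5):
        surplus = observed_harm - self.predicted_harm[action]
        if surplus > 0:
            # The robot did more harm than it predicted: "guilt" accrues.
            self.guilt[action] += surplus
        # Error-driven update of the harm estimate.
        self.predicted_harm[action] += lr * surplus

# Hypothetical environment: two actions with noisy true harm levels.
true_harm = {"shortcut": 0.8, "wait": 0.1}
agent = GuiltyAgent(true_harm)
for _ in range(20):  # trial and error, mistakes included
    action = agent.choose()
    agent.learn(action, true_harm[action] + random.uniform(-0.05, 0.05))
print(agent.choose())  # settles on the less harmful action, "wait"
```

After a handful of noisy trials the agent settles on “wait,” the lower-harm action: the surplus between predicted and actual harm plays the role that guilt plays for us.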

Watching these experiments is fascinating. The stakes are high and the range of approaches wide. The “winning” approaches will be those that deliver decisions that most (though almost never all) ethicists judge to be ethically sound. Companies that act in accordance with their AI's ethically sound decisions (e.g., by making the decision-making transparent) are most likely to gain dominant market share in a world increasingly concerned with principled corporate behavior. In the past, brands could operate in relative obscurity, but decades of Enrons, VWs, and Ubers have brought ethics out of the business school backwater and into the full light of the balance sheet.

Could it be that mid-21st-century corporate ethical behavior will rise to unprecedented levels thanks to the widespread application of faithfully followed, machine-learned principles? If so, that could prove to be one of the least predicted, and most ironic, consequences of a revolution that is sure to touch every aspect of modern life over the next decade.
 
Tom Guarriello, Ph.D. 

Thanks for subscribing to the RoboPsych Newsletter

If you're not subscribed, please sign up here.  
 
Be sure to subscribe to the RoboPsych Podcast!
Our mailing address is:
tom@RoboPsych.com
