Issue #74: Cute, Very Cute


The RoboPsych Newsletter

Exploring The Psychology of 
Human-Robot Interaction
Issue 74: August 7, 2017


Cute, Very Cute
Quote of the Week:


Nearly every step wrought havoc upon the prototype walker's frame. Designed to activate landmines in the most direct means possible, the EOD robot was nevertheless persistent enough to pick itself back up after each explosion and hobble forth in search of more damage. It continued on until it could barely crawl, its broken metal belly scraping across the scorched earth as it dragged itself by a single remaining limb. The scene proved to be too much for those in attendance. The colonel in charge of the demonstration quickly put an end to the macabre display, reportedly unable to stand the scene before him. “This test, he charged, was inhumane,” according to the Washington Post.

But how can this be? This was a machine, a mechanical device explicitly built to be blown up in a human's stead. We don't mourn the loss of toasters or coffeemakers beyond the inconvenience of their absence, so why should a gangly robotic hexapod generate any more consternation than a freshly squashed bug? It comes down, in part, to the mind's habit of anthropomorphizing inanimate objects. And it's this mental quirk that could be exactly what humanity needs to climb out of the uncanny valley and begin making emotional connections with the robots around us.

These sorts of emotional connections come more easily in military applications, where soldiers’ lives depend on these devices working as they should. “They would say they were angry when a robot became disabled because it is an important tool, but then they would add ‘poor little guy,’ or they'd say they had a funeral for it,” Dr. Julie Carpenter of the University of Washington wrote in 2013. “These robots are critical tools they maintain, rely on, and use daily. They are also tools that happen to move around and act as a stand-in for a team member, keeping Explosive Ordnance Disposal personnel at a safer distance from harm.”

Andrew Tarantola
Engadget


 
Cute Mind Games 

Isn't that the cutest robot you've ever seen up there in that picture?

Cute. Nature has been working on our design for millions of years, slowly making untold minute changes both in the genes that eventually replicate into the patterns that create human bodies and their capabilities, and in the environments in which those gene products will find themselves.

Can you imagine a time before “cute” existed? When looking at that picture didn't bring a reflexive “awww” to your lips? 

How about this one?



It turns out that the same evolutionary mechanisms that make that puppy irresistible are also at work when we're interacting with robots. 

And, that's no accident. See, we know a lot about “cute.” Biologists have been researching the characteristics that most of us call “cute” for a long time, going back to Konrad Lorenz's work in the mid-20th century. Here's how he distinguished cute from not cute. Can you tell which is which?



Of course you can. Cute = images on left; not-cute = images on right.

We've designed objects with cute features for centuries. Why? Because nature has pre-loaded biological mechanisms into our DNA that greatly enhance the probability that we will form attachments with, and nurture, organisms that fit the “cute” mold. (See: human infants.)

We just can't help ourselves. 

Enter robots. 

As soon as we began to imagine artificial humanoid life forms, we were forced to confront the question of their cuteness. How cute should they be? That depends on the story you want to tell.

Want to tell a tale about the dangers of humans stepping into the divine job of creating life? Sounds like a job for not cute.



How about one about a boy who finds a robot that helps him find his life's purpose after his brother dies? Definitely calls for cute.

 

Now, all those psycho-physical mechanisms that evolved over the years automatically kick in the very second we perceive an object that resembles a human in even the remotest ways. Psychological experiments show that we will attribute human characteristics to anything...really, anything!...that looks or acts even vaguely like a person.

Don't believe me? Go watch this short video. Now, tell someone what you saw. What did you say? Did you describe several geometric shapes moving around a confined space in precise engineering terms? Or did you say something else? Perhaps something about “characters” playing out an ancient tale?   

Even when robot designers do their best to minimize their machines' human resemblance, we humans will find some feature that triggers our anthropomorphic instincts...especially if that machine has even a scintilla of cuteness.  

Well, so what? So cute robots will tug on our heartstrings a little. Big deal.

But, is cute always desirable? Are there times when cute might be a detriment? 

This week's Quote contains a clue to that answer. When a robot's purpose is to perform some dull, dirty, or dangerous task (like battlefield explosive ordnance disposal, or EOD), the last thing we want kicking in is a cuteness-triggered nurturing instinct. The quoted colonel's comment that the EOD test was “inhumane” is a dead giveaway that this robot was a bit too cute for the task at hand. 

All of which is to say that our desire to create robots with which we will establish emotional connections is not without peril. My podcast co-host, Dr. Julie Carpenter, has written that military personnel have, in fact, put themselves in danger due to reluctance to send EOD robots into “harm's way.” Then there's the question of inappropriate emotional connections between robots and children. There's little doubt that increasingly capable, cute home robots will become important companions for young children (as well as the elderly). How will we manage these relationships so that people do not make inappropriate decisions based on the power of cuteness?

Robots are going to enter our homes, much as domesticated dogs did centuries ago. Like dogs, robots will require care, and owning them will bring with it both the joys of companionship and some measure of emotional (and physical) risk. It is imperative that robot designers take their responsibilities seriously since they risk triggering some of our deepest evolutionary mechanisms with just the shape of a face, or the “longing” looks of slightly oversized “soulful” eyes.

An object's integrated form/function has rarely been as important as it will be in our soon-to-arrive social robots. 

Tom Guarriello, Ph.D. 

Thanks for subscribing to the RoboPsych Newsletter

If you're not subscribed, please sign up here.  
 
Be sure to subscribe to the RoboPsych Podcast!
Our mailing address is:
tom@RoboPsych.com


Issue #73: The Simple Things In Life


The RoboPsych Newsletter

Exploring The Psychology of 
Human-Robot Interaction
Issue 73: July 24, 2017


Simple, But Not Easy
Quote of the Week:

 

Robots can do all kinds of things that humans just can’t.

They can sit in the same place for years and weld car parts together with a level of precision that no human can faithfully reproduce. Some smart machines can speak dozens of languages. Others can fly hundreds of feet in the air, surveilling life down below.

But sometimes the biggest challenges for robots are things that we humans take for granted. Like, say, not falling into decorative fountains.

On Monday, a robot security guard from the California-based startup Knightscope fell in a fountain outside of a Washington, D.C., office building. The wheeled robot was making its rounds when it encountered a common feature of manmade environments it’s not built to handle: a fountain with stairs.

...

Presumably, the robot is equipped with computer vision software that should be able to detect obstacles, like bodies of water, or cars. But clearly, its smart eyes didn’t quite work in this instance, demonstrating how difficult it is for robots to navigate a world built for humans.


A Robot Security Guard Rolled Its Way Into A Fountain
April Glaser, Slate


 
It's The Simple Things In Life   


How old were you when you learned not to try to walk on water?

Crazy question, right? Who doesn't know that you can't do that?  

But, apparently, this Knightscope security robot at a D.C. office building tried to do just that last week with the result that 99.999% of us would have predicted. It fell in a fountain. 

It's the kind of incident that should remind us of a very important fact about even the “smartest” AI-equipped robots: it's the simple things in life that they're clueless about.

The robot's maker, Knightscope, claims that the system is equipped with “advanced anomaly detection,” “autonomous presence,” “live 360 degree video streaming,” and, coming soon (!), “gun detection.”

Cool! 

But, apparently, if the sun reflects off the surface of a fountain in just the right way, it will merrily roll right along into the drink. Kind of like that Tesla did in Florida.

Twitter wags were quick to proclaim the incident a “suicide” and posted photos of the memorial that people created at the bot's charging station. 

 
A couple of our most fundamental cognitive tendencies are on display in this incident.

First off, we humans “know” a lot more about the world than we know we know. I mean, we know we can't walk on water but if somebody asked you to make a list of all the things you know, chances are pretty good that you wouldn't put “no water walking” on the list.

It's just “common sense.”

“No water walking” is part of what phenomenological philosopher Alfred Schutz called our “stock of knowledge.” This is the vast collection of tacit, unconscious facts about the world that makes up the infrastructure of our everyday lives. Without this lived knowledge, every moment would be filled with an exhaustingly complex set of conscious decisions. You know: let go of an object: it falls. Wave a pedestrian across a street: stop driving until she crosses. Answer a question with a shrug: ask me something else.

Robots, on the other hand, only know what we tell them (and, recently, what we “train” them to know through structured learning models). But there's no way we can train them to know everything we know.

Hell, we don't even know everything we know, so how could we? 


This means that robots will be teaching us things we “know” by doing things like rolling into fountains for the foreseeable future. It's like realizing your child doesn't know that people in England drive on the other side of the road when she steps off a London curb without first looking to the right. Robots will be surprising us with their “stupidity” for years to come. 

Then there's the “memorial.”

I know, I know...people were just “kidding around” when they placed their cards, flowers, and photos. And yet, Oliver Griswold's tweet captured something important about our cognitive operations when he observed: “The future is weird.”

The sight of this barely humanoid (more like, R2D2oid) robot lying in the fountain almost instantly evoked a slew of remarkably consistent anthropomorphized reactions: the robot was “bored,” or “despondent,” “hated its job,” or “overheard some horrible news” and decided to “end it all.”

Moments like this give us a glimpse into just how different many parts of the future will be from the present...how strange it will be to interact with systems that evoke odd emotional reactions while highlighting aspects of our world that we take for granted.

Robots will make us feel both inferior and superior to them at the same time. 


It will be the simplest everyday things that they won't be able to master, things that we'll wryly mock, that will allow us to maintain our comfort in our species' exceptionalism as AI comes to outperform us in one domain after another.

But, sooner or later, that means that we'll have to figure out what's so special about being human and resign ourselves to the odd realities of our very, very weird future. 

 

Tom Guarriello, Ph.D. 

Thanks for subscribing to the RoboPsych Newsletter

If you're not subscribed, please sign up here.  
 
Be sure to subscribe to the RoboPsych Podcast!
Our mailing address is:
tom@RoboPsych.com


Issue #72: What Is Sex For?


The RoboPsych Newsletter

Exploring The Psychology of 
Human-Robot Interaction
Issue 72: July 10, 2017


What Is Sex For?
Quote of the Week:

 

In 2017 most liberal societies accept or tolerate sex in many different forms and varieties. Sex toys and masturbation aids have been used for centuries and can be easily purchased in stores in many countries. Now companies are developing robots for sexual gratification. But a robot designed for sex may have different impacts when compared with other sex aids. Those currently being developed are essentially pornographic representations of the human body – mostly female. Such representations combined with human anthropomorphism may lead many to perceive robots as a new ontological category that exists in a fantasy between the living and the inanimate. This is reinforced by robot manufacturers with an eye to the future. They understand the market importance of adding intimacy, companionship, and conversation to sexual gratification.

The aim of this consultation report is to present an objective summary of the issues and various opinions about what could be our most intimate association with technological artefacts. We do not contemplate or speculate about far future robots with personhood - that could have all manner of imagined properties. We focus instead on significant issues that we may have to deal with in the foreseeable future over the next 5 to 10 years.


Our Sexual Future With Robots
By Noel Sharkey, Aimee van Wynsberghe, Scott Robins, Eleanor Hancock


 
Hello, Robot, You Sexy Thing!   

What is sex for?

Crazy question, right? Who doesn't know what sex is for? It's obvious.

But, let's take a step back for a second to consider the question in light of the “consultation report” published this week by the Foundation for Responsible Robotics (FRR). The report was the organization's attempt to answer a set of important questions about human-robot sexuality by surveying the literature and presenting the points of view of leading thinkers and researchers in the field. Questions like: Would people have sex with a robot? What kind of relationship can we have with robots? Could robots help with sexual healing and therapy?

Not surprisingly, experts' points of view differ dramatically. 

Some believe sexual contact between humans and robots is an abomination, an unnatural act that objectifies (typically) women and psychologically/ethically harms any humans who engage in it.

Others see this kind of activity as a way for people who are unable to engage in sexual relations with other humans (for whatever reasons) to enjoy the benefits of sexual contact. Human-robot sexuality is a therapeutic tool, they say, to help millions to experience sexual pleasure in a legal, personal, safe manner. The majority who fall into this camp see robot sex as akin to masturbation with a physically present “partner.”

But, what about that peculiar question: what is sex for? 


If you think about it for a minute, you'll start to get a glimpse of the complexities that are right beneath the surface of the initial obviousness.

Sex's first reason for being...its main “job,” if you will...is procreation.

Technological advances aside, heterosexual sexual relations remain the only reliable method for making new people at scale. Let's call that sex's #1 functional job. That job...creating other humans in sufficient quantities to replace those who die each year...can only be accomplished via sex with another member of our species.

Never's a long time, but procreation's a job robots are likely to never be able to perform.  

But, sex has other functional jobs as well. Providing physical pleasure, for example. The human body has evolved to deliver a systemic array of pleasurable experiences to accompany the process of sexual arousal and release.

Can robots deliver those experiences? Can they do this job satisfactorily?

History and experience show us that the human imagination is more than up to the task of creating fantasies that enable (at least) a reasonable facsimile of sexual arousal leading to orgasm. Are the physical pleasures of human-human sexual contact and those enabled by human-robot contact qualitatively equivalent? Are they sufficiently similar to make the latter an acceptable substitute for the former? Personal preferences differ, of course, but the long-standing proliferation of sex toys, aids, and apparatus (now including “teledildonics”) indicates that for many people objects are a perfectly acceptable way to get sex's physical pleasure job done. It's reasonable, then, to expect that robots will be more than capable of successfully doing this job. 


What else is sex for?

At some point in our species' evolution, relationships between humans took on emotional characteristics that were experienced and expressible through sexual behavior. The complex process of becoming intimately emotionally involved with another human became socially (if not physiologically) intertwined with being physically intimate. 

Always? Sometimes? Maybe rarely. For many, never.

That meant that, for some (many?), emotional intimacy became a “nice to have,” but not a “must have,” element of sex. The FRR report touches on the complexities of getting this job done. Prostitutes interviewed for the document reported that customers frequently ask them to feign emotional responses in sexual encounters, wanting them to experience “real” (heartfelt?) orgasms. Robot makers understand this emotional job, and are doing their best to provide artifacts that simulate contextually-accurate “faux-motional” reactions through gestures, facial expressions, sounds, and speech.

And this is where robots and people part ways most clearly. The psychological operations one must engage in to experience sexual-situational-emotional-fulfillment from a robot are non-trivial. One must ignore significant aspects of the encounter (the artificial quality of the robot's verbal and non-verbal cues; the obvious absence of lived-temporality, proactivity, spontaneity; sensory elements like responsive touch, smell, taste) and behave “as if” (i.e., ignore) other artificial qualities. While those psychological  accommodations might be adequate for some of us in certain situations, the prospect of having sex's “emotional intimacy job” performed adequately, and over time, seems far-fetched. Perhaps anthropomorphism can lead us to “love” objects, but no degree of fantasy can credibly get us to have the experience of those objects loving us back. Our “desire to be desired” by another person is a significant part of human sexuality that robots can neither experience nor express.  

That means that human-robot sexual contact by definition violates the first rule of interpersonal behavior: it is inherently one-sided; it can never be inter-personal. Like all Turing-test challenging exercises, then, the very best can only be simulacra, copies of authentic human-human interactions, designed to trick humans into believing (actually, suspending the disbelief that is always right on the edge of awareness) they are in an inter-personal encounter. “One-sided sex” is a special category of what we usually mean when we consider human sexuality.  

Well, so what? Is that so bad? If people who are unable to have sex with others find robots acceptable substitutes, who are we to judge them? That's a question many ask when they imagine our future relationships with robots. The FRR imagines an increase in these kinds of relationships but reaches no conclusions about their impact on either individuals or society. Personally, I'm not inclined to view their proliferation as societally positive. 

The document also considers even more serious questions around the effects robots could have on sexual crimes, including those committed by child-sex offenders. Again, FRR chooses not to take a definite position on this question, opting instead to present opposing arguments by experts in the field:

This is a question that suffers major disagreement. On one side, there is a small number who believe that expressing disordered or criminal sexual desires with a sex robot would satiate them to the point where they would not have the desire to harm fellow humans. On the other side, there are scholars and therapists who believe that this would be an indulgence that could encourage and reinforce illicit sexual practices. This may work for a few but it is a very dangerous path to tread and research could be very difficult. It may be that allowing people to live out their darkest fantasies with sex robots could have a pernicious effect on society and societal norms and create more danger for the vulnerable. Currently there is a lack of clarity about the law on the distribution of sex robots that are representations of children.

When I was a practicing psychotherapist, I treated several sex offenders who exhibited persistent illegal behaviors, even in the face of serious legal consequences. In my experience, a robot, no matter how realistic, would not have led them to forgo real-world encounters.  


Regardless of our individual beliefs, feelings, and thoughts on these issues, human-robot sexual behavior is likely to become commonplace over the next decade. To wit, FRR cites recent examples of UK men bringing sex dolls on dates in public, and pub owners casually accepting the couples as they would human-human partners. 

We see here another example of 21st-century technological innovation challenging us to confront difficult ethical questions (e.g., unemployment, autonomous lethal weapons, self-driving vehicle safety), forcing us to make decisions we never could have imagined being forced to make only a very short while ago.

In this case, one of those questions is, “what is sex for?”

 

Tom Guarriello, Ph.D. 

Thanks for subscribing to the RoboPsych Newsletter

If you're not subscribed, please sign up here.  
 
Be sure to subscribe to the RoboPsych Podcast!
Our mailing address is:
tom@RoboPsych.com


Issue #71: Robots Can't Feel Your Pain, Fear, or Joy. So What?


The RoboPsych Newsletter

Exploring The Psychology of 
Human-Robot Interaction
Issue 71: June 26, 2017


What Will Humans Do?
Quote of the Week:

 

Early last year, the World Economic Forum issued a paper warning that technological change is on the verge of upending the global economy. To fill the sophisticated jobs of tomorrow, the authors argued, the ‘reskilling and upskilling of today’s workers will be critical’. Around the same time, the then president Barack Obama announced a ‘computer science for all’ programme for elementary and high schools in the United States. ‘[W]e have to make sure all our kids are equipped for the jobs of the future, which means not just being able to work with computers but developing the analytical and coding skills to power our innovation economy,’ he said.

But the truth is, only a tiny percentage of people in the post-industrial world will ever end up working in software engineering, biotechnology or advanced manufacturing. Just as the behemoth machines of the industrial revolution made physical strength less necessary for humans, the information revolution frees us to complement, rather than compete with, the technical competence of computers. Many of the most important jobs of the future will require soft skills, not advanced algebra.


The Future is Emotional
By Marina Benjamin


 
Robots Can't Feel Your Pain, Fear, or Joy  

We humans have been working with machines for less than 200 years. That's no time at all when viewed in the context of our species' 150,000-year run as the planet's dominant force. That makes technology a late addition to our social repertoire. Long before we had machines, we'd developed a suite of social mechanisms for living, communicating, and working together. Unsurprising, then, that our ability to relate to one another emotionally has been a crucial factor in our rise to planetary dominance. 

Think about it. Being able to determine that one of your fellow hunter-gatherers was frightened, angry, surprised, disgusted, sad, or happy made it much more likely that you'd be in tune with other group members; that you'd experience and display resonance with what others were experiencing; that you had what we came to call empathy for what others were going through. Doing so meant you had a much better chance to succeed in the tribe: to survive and to spread your genes. This made our emotionally aware and expressive ancestors evolutionary winners. 

As we became increasingly cognitively and technologically capable, we came to believe that intelligence, as demonstrated by a mastery of abstract concepts and numbers -- particularly through science -- was the key to success.

Over the centuries, emotions came to be seen as “unreliable,” “subjective,” “soft” and, in most cultures, “feminine.” Real men did STEM (Science, Technology, Engineering, Math). That's what made the tools (and weapons) that made the world go round. 

Enter AI and robots. 

In the early days of the 21st century, STEM, the pinnacle of human achievement, became child's play for AI. As a result, no modern human would intentionally choose to manually calculate a rocket's launch details, the return on investment for a field of corn, or the perfect factory production schedule. These were now jobs for machines. And, if expert predictions are anywhere close to right, there will soon be many more jobs that fall into that category.

What's left for humans in a world where machines can do so much?

Our first thoughts take us to “emotional work.” When we hear that phrase today, it conjures up images of child care, nursing care, or elder care; or maybe social work or missionary outreach.

What do those jobs have in common? First off, they're almost guaranteed to be low paying. Emotional work is practically universally economically undervalued. Second, emotional work still carries substantial gender connotations. Caring, “the soft stuff,” is for women; men get real things done.

Around 20 years ago, the phrase “emotional intelligence” (EQ) started showing up in business literature, made popular by Dan Goleman's book of the same name. Leaders of organizations of all kinds began to appreciate that cognitive skills in themselves were not the last word in employee performance. In a world where technical skills were becoming increasingly widespread, organizations saw that an individual's EQ, emotional sensitivity and attunement to others, whether customers or fellow employees, often accounted for significant differences in job performance. Today, consistently demonstrated EQ is often a key differentiator between average and superior performers. 

As we look ahead to a world of machine workers, EQ is likely to become an even more important factor in business success. If the future of work is “take any current job and add AI,” as Kevin Kelly suggests, we might also say that the future of successful employment is “take any AI-assisted job and add EQ.” If every AI customer service agent can provide fast, accurate information, superior customer service will deliver that information in an emotionally intelligent manner. Top-quality salespersons will develop emotionally-founded relationships with customers and use every interaction as an opportunity to demonstrate a caring approach to the customer's needs and desires. Physicians, police officers, teachers, and caregiving service providers of all kinds will be distinguished by their empathic resonance with their constituents and their families. 

We see this trend today in the highest levels of luxury services. Every luxury store employs customer service staff who are sensitive to the slightest nuances of their customers' experiences, anticipating reactions from those they know well. High-priced personal care providers do the same. Their goal is to make the customer service, sales, or patient experience as emotionally comfortable as possible. Personalized service to the wealthy feels emotionally intimate. 

We've also seen that services that were once only available to the wealthiest amongst us are now broadly available. Remember when only the rich took limos? Today, your Uber pulls up to your door in a matter of minutes, just like their limos did. Personal shoppers flagged special items so that their top-spending customers could get first access. Today, Amazon will notify you of items you might be interested in based solely on a suite of rapidly improving algorithms that learn from your prior purchases and social graph. Remember when only the rich had financial advisers?  

What the wealthiest of us enjoy today, the rest of us are likely to experience (in some form) tomorrow. (This is the real meaning of “trickle-down economics.”)

If that trend continues, we will all have high expectations for the emotional quality of all of our service interactions in the future. We already see the best companies competing for your recommendations to others, using something called Net Promoter Score to evaluate the likelihood that their customers are satisfied, no, happy enough with their experiences to suggest them to their closest friends and family members. 

Emotionally luxurious interactions will become the new standard of excellence when AI takes care of the nuts and bolts of business transactions. No matter how sophisticated robots become, genuine, caring human engagement will always be a step ahead. After all, our ancient ancestors learned the power of forming positive emotional connections with others, a secret that will continue to make all manner of future interactions more satisfying. Unless and until robots can go beyond simulated emotional expressions, every transaction will always have hidden within it an added layer of value that can only be delivered by another, feeling, person. 

 

Tom Guarriello, Ph.D. 

Thanks for subscribing to the RoboPsych Newsletter

If you're not subscribed, please sign up here.  
 
Be sure to subscribe to the RoboPsych Podcast!
Our mailing address is:
tom@RoboPsych.com


Issue #70: Teaching Robots to be Ethical


The RoboPsych Newsletter

Exploring The Psychology of 
Human-Robot Interaction
Issue 70: June 16, 2017


Making Robots Ethical
Quote of the Week:


This virtual school, which goes by the name of GoodAI, specialises in educating artificial intelligences (AIs): teaching them to think, reason and act. GoodAI’s overarching vision is to train artificial intelligences in the art of ethics. “This does not mean pre-programming AI to follow a prescribed set of rules where we tell them what to do and what not to do in every possible situation,” says Marek Rosa, a successful Slovak video-game designer and GoodAI’s founder, who has invested $10m in the company. “Rather, the idea is to train them to apply their knowledge to situations they’ve never previously encountered.”

Experts agree that Rosa’s approach is sensible. “Trying to pre-program every situation an ethical machine may encounter is not trivial,” explains Gary Marcus, a cognitive scientist at NYU and CEO and founder of Geometric Intelligence. “How, for example, do you program in a notion like ‘fairness’ or ‘harm’?” Neither, he points out, does this hard-coding approach account for shifts in beliefs and attitudes. “Imagine if the US founders had frozen their values, allowing slavery, fewer rights for women, and so forth? Ultimately, we want a machine able to learn for itself.”

Rosa views AI as a child, a blank slate onto which basic values can be inscribed, and which will, in time, be able to apply those principles in unforeseen scenarios. The logic is sound. Humans acquire an intuitive sense of what’s ethically acceptable by watching how others behave (albeit with the danger that we may learn bad behaviour when presented with the wrong role models).

 

Teaching Robots Right From Wrong
Simon Parkin

Teaching AI Humanity's Rules


Teaching the simplest ethical rules is very hard; things “everybody knows” from childhood turn out to be devilishly tricky to define exactly.

“You can't take something just because you want it.”

“Wait your turn.”

“Tell the truth.” 


Why are these lessons so hard to teach? Because there are so many situations that call for little “common sense” tweaks. You can take something just because you want it if you're in your kitchen, but not if you're in a friend's. Go ahead and get onto the escalator first if the person in front of you hesitates and steps aside. Even if she asks, don't tell your friend the truth about her surprise birthday party on Saturday.

Children learn these lessons in moment-by-moment interactions with others over the first decade or so of life. But what about AI robots? How should we teach these incredible machines that it's not acceptable to make medical, hiring, or credit decisions because of a person's gender or race; not OK to target recent widows and widowers with “companionship” ads; unacceptable to encourage children to tell “little white lies” to their parents to get things they want?

AI/robotics researchers are currently using different methods to teach machines these simple lessons. One school of thought wants to start by exposing them to a pre-programmed list of persistent ethical values; others worry that this method is too restrictive and can't possibly account for all the situational variations we face in everyday life. Another wants to adopt a “trial and error” method to engender something akin to “guilt” when a robot causes more harm than had been predicted. This means robots making lots of mistakes and causing some damage along the way. Still another wants AIs to read millions of stories about typical human interactions and learn the ethical principles that govern them.
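For readers curious about the mechanics, here is a minimal sketch, in Python, of how that “trial and error” idea might be wired into a learning robot's reward signal. Everything in it (the function names, the weight, the numbers) is an illustrative assumption, not any research group's actual implementation: the robot's reward is docked by a “guilt” penalty whenever the harm it actually causes exceeds the harm it predicted.

    # Illustrative sketch only (hypothetical names and values): a "guilt" penalty
    # that kicks in when actual harm exceeds predicted harm, reducing the reward
    # the learning robot is trying to maximize.
    def guilt_penalty(predicted_harm: float, actual_harm: float, weight: float = 2.0) -> float:
        """Penalty applied only when the outcome is worse than the robot expected."""
        return weight * max(0.0, actual_harm - predicted_harm)

    def shaped_reward(task_reward: float, predicted_harm: float, actual_harm: float) -> float:
        """Task reward minus the guilt penalty."""
        return task_reward - guilt_penalty(predicted_harm, actual_harm)

    # Example: the robot expected negligible harm (0.1) but caused more (0.6),
    # so its reward for an otherwise successful action drops from 1.0 to 0.0.
    print(shaped_reward(task_reward=1.0, predicted_harm=0.1, actual_harm=0.6))

Over many such trials, actions whose outcomes keep turning out worse than predicted become less attractive to the learner, which is roughly the “learning from its mistakes” dynamic described above.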

Watching these experiments is fascinating. The stakes are high and the range of approaches wide. The “winning” approaches will be those that can deliver decisions that most (almost never all) ethicists judge as ethically sound. Those companies that act in accordance with their AI's ethically sound decisions (e.g., demonstrate decision-making transparency) are most likely to gain dominant market share in a world increasingly concerned with principled corporate behavior. In the past, brands could operate in relative obscurity, but decades of Enrons, VWs, and Ubers have brought ethics out of the business-school backwater into the full light of balance sheets.

Could it be that mid-21st century corporate ethical behavior will rise to unprecedented levels thanks to the widespread application of faithfully-followed machine-learned principles? If so, that could prove to be one of the least predicted, and most ironic, consequences of a revolution that is sure to touch every aspect of modern life over the next decade.    
 
Tom Guarriello, Ph.D. 

Thanks for subscribing to the RoboPsych Newsletter

If you're not subscribed, please sign up here.  
 
Be sure to subscribe to the RoboPsych Podcast!
Our mailing address is:
tom@RoboPsych.com
