Issue #81: Disrupting Disability


The RoboPsych Newsletter

Exploring The Psychology of 
Human-Robot Interaction
Issue 81: December 4, 2017


Disrupting Disability
Quote of the Week:

 

“Built tough”: That’s the slogan used in ads for Ford trucks, which are shown hauling massive loads, towing equipment, and roaring across rugged terrain.

But the workers who assemble those trucks in Ford’s manufacturing plants are subject to human frailties. They can suffer from back and shoulder pain as a result of carrying out the repetitive tasks required by their jobs, particularly as they work on chassis suspended above them. Ford estimates that some assembly workers lift their arms about 4,600 times per day, or about 1 million times per year.

So workers on Ford’s assembly lines in two U.S. factories are getting some extra help. In a pilot project, the workers are suiting up with the EksoVest, an upper body exoskeleton from the Bay Area company Ekso Bionics. Ford plans to expand the test to factories in Europe and Latin America.

Ekso Bionics is primarily known for its work in the medical sector: The company sells a lower-body exoskeleton that enables paraplegic people to walk again.

But Russ Angold, cofounder and CTO of Ekso Bionics, tells IEEE Spectrum that he’s also found significant demand in the industrial sector for exoskeletons. “In 2015 we started getting a lot of inquiries, cold calls coming in, with people asking, ‘Where’s our construction exoskeleton, where’s our industrial exoskeleton?’” Angold remembers. “They’d say, ‘You guys are helping paralyzed people walk, can you use the same tech to make our workers stronger?’” 

Eliza Strickland, IEEE Spectrum
Ford Assembly Line Workers Try Out
Exoskeleton Tech to Boost Performance


 
Disrupting Disability  

What if no worker ever got injured in the workplace again?

That's a dream, right?

Every year, approximately 3 million US workers (of more than 130 million employed) sustain workplace injuries that lead them to file workers’ compensation claims for medical treatment and lost income. Those claims add up to hundreds of millions of dollars in payouts and untold productivity losses for business and industry. And that doesn’t even begin to capture the human costs of the pain, disability, and death caused by industrial accidents every year.

Here’s what we know: robots can definitely help stem these human and financial costs.

Almost daily we hear bold predictions about AI and robots eliminating thousands of jobs around the world. The predictions lead to plenty of FUD (fear, uncertainty, and doubt) for workers, leaders, and economists alike. The logic of these predictions is compelling: companies will look to cut costs by eliminating human workers when automated substitutes become viable. 

But, businesses don’t flourish by cutting costs. Oh, cost cutting will raise profitability in the short run, but thriving companies almost always take those cost savings and invest them back into growth opportunities. Add to those savings the reductions in unemployment compensation insurance and company-paid disability claims, easily over $100 billion in the US last year. That tells me that when the kinds of jobs that are injuring workers today can be performed by robots, insightful leaders will create new ways to deliver greater value to their customers, and create the jobs those new ways will require. 

It’s hard for us to imagine a world in which the vast majority of dangerous and dirty jobs are done by robots. After all, human toil has been the source of income for people throughout history. But, when automated systems can perform all the “heavy lifting,” leaving it to humans to use our creative, flexible cognitive and physical skills, our current acceptance of millions of workplace injuries and thousands of fatalities will seem barbaric. Likewise, taking for granted the more than 40,000 fatalities and 4,000,000 injuries on US roadways in 2016 will seem bizarre when autonomous vehicles safely move millions of people per day on our streets and highways. 

When we think about a future in which we are living amongst millions of job-capable robots, we often find ourselves at a loss to imagine how our own lives will play out. One thing seems certain: a lot fewer of us will be killed or disabled in the workplace as the most perilous jobs are taken over by machines. In the midst of our fears over these developments, it’s important to remember that progress means leaving behind activities that were only necessary because there were no safer, faster, cheaper ways to do them at the time. What we’ve done before is invent new ways to create value for one another; there's no reason to believe we won’t do so again.

But, this time without so much risk to life and limb.

 
Tom Guarriello, Ph.D. 

Thanks for subscribing to the RoboPsych Newsletter

If you're not subscribed, please sign up here.  
 
Be sure to subscribe to the RoboPsych Podcast!
 
Our mailing address is:
tom@RoboPsych.com

unsubscribe from this list    update subscription preferences 
 







Issue #80: Trustworthiness In China


The RoboPsych Newsletter

Exploring The Psychology of 
Human-Robot Interaction
Issue 80: November 13, 2017


Trustworthiness In China
Quote of the Week:

 

“Imagine a world where many of your daily activities were constantly monitored and evaluated: what you buy at the shops and online; where you are at any given time; who your friends are and how you interact with them; how many hours you spend watching content or playing video games; and what bills and taxes you pay (or not). It's not hard to picture, because most of that already happens, thanks to all those data-collecting behemoths like Google, Facebook and Instagram or health-tracking apps such as Fitbit. But now imagine a system where all these behaviours are rated as either positive or negative and distilled into a single number, according to rules set by the government. That would create your Citizen Score and it would tell everyone whether or not you were trustworthy. Plus, your rating would be publicly ranked against that of the entire population and used to determine your eligibility for a mortgage or a job, where your children can go to school - or even just your chances of getting a date.

A futuristic vision of Big Brother out of control? No, it's already getting underway in China, where the government is developing the Social Credit System (SCS) to rate the trustworthiness of its 1.3 billion citizens. The Chinese government is pitching the system as a desirable way to measure and enhance "trust" nationwide and to build a culture of "sincerity". As the policy states, "It will forge a public opinion environment where keeping trust is glorious. It will strengthen sincerity in government affairs, commercial sincerity, social sincerity and the construction of judicial credibility."

 

Rachel Botsman, Wired Magazine
Big data meets Big Brother as China moves to rate its citizens


 
Trust Is Just a Number  

What if everything you do or say could be rated by everyone you encounter?

What if those ratings were aggregated into a single score; one number that depicted your trustworthiness, employability, social status, attractiveness?

If that sounds like familiar science fiction, it might be because you read Cory Doctorow's 2003 novel, Down and Out in the Magic Kingdom, or Gary Shteyngart's 2010 modern romantic tale, Super Sad True Love Story.

Or, maybe you saw Nosedive, Season 3's premiere episode of Netflix's always provocative Black Mirror. 

Oh, and, of course, the granddaddy of them all, George Orwell's original classic, 1984, which forever turned the name “Big Brother” into a constant haunting presence. 

It's pretty clear that this kind of frightening prospect has been with us for decades. 

Now, what if that score was created and monitored by your nation's government and used to determine everything about your life: what kind of job you could get, what medical treatment you could receive, even where you could get dinner reservations? 

And, what if that government was that of the People's Republic of China?

That's what's about to happen if the current (voluntary) pilot program for the country's “Social Credit System” (SCS) goes into (mandatory) effect for all 1.3 billion Chinese citizens in 2020. If the current version of the SCS is implemented, not only would your own behavior be distilled into a single score ranging from 350 to 950, but everything anyone in your social circles does or says would affect your score as well. (From Wired: "People will have an incentive to say to their friends and family, Don't post that. I don't want you to hurt your score but I also don't want you to hurt mine.")

Black Mirror's Nosedive depicts some of the possibilities as lead character Lacie strives to land a new apartment in a gated community restricted to members whose average rating is 4.5 or above on a 5 point scale. She's only a 4.2 when she applies, so getting some 5 star encounter ratings from influencers (4.7s or 4.8s!) is just the ticket for landing her dream home. No spoilers...but...you can probably guess how that goes.

The psychological implications of living in this mega-gamified world of relentless encounter scoring, massive data collection, and the constant computation of points, badges, and leaderboards are stark. So far, though, we're told that:
Higher scores have already become a status symbol, with almost 100,000 people bragging about their scores on Weibo (the Chinese equivalent of Twitter) within months of launch.
This is what happens when a country that lived through decades of state-enforced economic deprivation is plunged into 21st century-style capitalism. That is, normal interpersonal competition for rewards and the status they bring will drive a significant fraction of the population into full-fledged (but, state-managed) competitiveness.

What else can we predict? The Wired article speculates that there will likely be a market for “reputation consultants” who will help people (many of them desperate) to change their lives by raising their singularly significant SCS.

Black markets? Of course. 

Hacking? It goes without saying.

But, here's what the government has to say about the policy:
The Chinese government is pitching the system as a desirable way to measure and enhance "trust" nationwide and to build a culture of "sincerity". As the policy states, "It will forge a public opinion environment where keeping trust is glorious. It will strengthen sincerity in government affairs, commercial sincerity, social sincerity and the construction of judicial credibility."
Think about that: the government believes that “glorious” trust and social sincerity will be fostered and strengthened through constant anonymous evaluation of every aspect of one's life by people whose own fate lies in the hands of every person (rater) with whom one comes in contact. 

Let's be clear: this ubiquitous social management system will have intended and unintended consequences far beyond “social sincerity.” Its unprecedented pervasiveness and complexity guarantee it. 

And, while we in the democratic West may shake our heads and bemoan this horrifying breach of civil liberties in China, our own lives are themselves subject to increasingly invasive scrutiny.

Technology makes scrutiny possible; government policy makes it acceptable; individual compliance makes it normative and oppressive.

As closed circuit TV cameras blanket our environment, as facial recognition capability becomes commonplace, as ingestible drug monitoring microbots become standard procedure for assuring patient medication compliance, our sensitivity to our steady loss of privacy fades.

To say that these systems will enable the growth of “glorious trust” is the kind of doublethink that Orwell wrote about in 1949. The question now is, how can we use the power of technological monitoring capabilities to ensure trustworthiness without resorting to the paralyzingly totalitarian ethos of China's new Social Credit System? Going too far puts us at risk for a future of fear, suspicion, and subterfuge that could catastrophically set back social cohesion.

If you're interested in more thoughts about China's proposed system, check out Dr. Julie Carpenter and me talking about the issues on the latest episode of the RoboPsych Podcast.
 
Tom Guarriello, Ph.D. 

Thanks for subscribing to the RoboPsych Newsletter

If you're not subscribed, please sign up here.  
 
Be sure to subscribe to the RoboPsych Podcast!
 
Our mailing address is:
tom@RoboPsych.com

unsubscribe from this list    update subscription preferences 
 







Issue #79: PopTech Thoughts


The RoboPsych Newsletter

Exploring The Psychology of 
Human-Robot Interaction
Issue 79: October 24, 2017


WTF?
Quote of the Week:

 

“Back in 1811, weavers in Britain’s Nottinghamshire took up the banner of the mythical Ned Ludd (who had supposedly smashed mechanical knitting machines thirty years earlier) and staged a rebellion, wrecking the machine looms that were threatening their livelihood. They were right to be afraid. The decades ahead were grim. Machines did replace human labor, and it took time for society to adjust.

But those weavers couldn’t imagine that their descendants would have more clothing than the kings and queens of Europe, that ordinary people would eat the fruits of summer in the depths of winter. They couldn’t imagine that we’d tunnel through mountains and under the sea, that we’d fly through the air, crossing continents in hours, that we’d build cities in the desert with buildings a half mile high, that we’d stand on the moon and put spacecraft in orbit around distant planets, that we would eliminate so many scourges of disease. And they couldn’t imagine that their children would find meaningful work bringing all of these things to life.”

 

Tim O'Reilly,
WTF?: What's The Future and Why It's Up To Us


 
Pop!Tech Thoughts  

I was fortunate enough to attend the Pop!Tech conference in Camden, Maine last week. I've been to the event before, but it's been several years since I last attended. It's always an interesting experience, and this time was no different. Of the many thought provoking speakers, Tim O'Reilly's kickoff presentation stands out for me. O'Reilly is the founder and CEO of O'Reilly Media and a fixture on the tech scene for the last four decades. He spoke about his latest book, from which this issue's Quote of The Week is taken.

Tim's thoughts on the Luddites preview his views on the possibilities facing us in the age of AI-equipped technologies of all kinds. The key word in that quote?

"Imagine."

While technological developments disrupt jobs, they do not eliminate work. Like England's 19th-century weavers, we're psychologically predisposed to fear the losses brought on by automation more than we are to see its possibilities. Since the Industrial Revolution conditioned us to think of work and jobs as synonymous, logic says if robots take our jobs, we'll be out of work.

But, as a slide from O'Reilly's talk clearly demonstrated, there is plenty of work to be done. Just as the weavers' children learned to fly airplanes, ours will learn to deliver remote healthcare, teach children in other countries to code, or create new organizational models that supercharge workforce engagement. 

So, there won't be a shortage of work. The question is: Will we apply our vast collective imagination to finding ways to deliver economically valuable goods and services, things that others are willing to pay for?

We must. And, I saw examples in Maine that gave me real hope that we will. 

Take CRISPR. Dr. Kevin Esvelt presented his team's work on what he calls “sculpting evolution,” gene editing technology's potential to modify plants and animals to make them, for example, more energy efficient. Or, to re-engineer mosquito DNA that makes malaria a thing of the past. Or, to increase agricultural production to bring safe, nutritious food to the starving millions in sub-Saharan Africa.

Would those changes mean jobs? I bet they would.

What about eSports? You know, those video games that millions of people watch on YouTube and elsewhere. Do we think there's an economic future there? Well, Asi Burak does. His new book, Power Play: How Video Games Can Save The World, is a pretty compelling tale of using games for serious purposes, like enabling peace in the West Bank.

Jobs there? For sure.

From a social perspective, Pop!Tech welcomed Mia Birdsong, who identified loneliness as one of the major public health threats in America. She described her mission as “finding our way back to the community that is in our DNA.” Birdsong spoke of creating experiences that use the love, care, and generosity of spirit that has always been at our core to craft a new version of American life.

What kinds of jobs could we create to enable ourselves to pull together instead of fostering isolation? What if the health benefits of that work could be appropriately valued by industries that rely on resilient, emotionally healthy workers? Could there be jobs there? Very likely.

There were more examples, but I think you get the picture. The future of work and jobs is yet to be written. Just as the Luddites feared losing all to machines, we too are in a moment of uncertainty about what could replace the jobs (so many of them tedious and soul-sapping) that we rely on today for our survival.

The future is an imagination laboratory. As we become increasingly reliant on machines to perform automatable tasks, we will be presented with new ways to make our lives fuller and richer. If we let our scarcity-rooted fears blind us to the possibilities we face, we'll likely fulfill some of our most dystopian stories. But if we use our most human of powers -- our imaginations -- to make realities of the abundant possibilities before us, there's no reason we can't tackle even the most intractable problems we face.

As Pogo once said, “We have met the enemy, and he is us.”
 
Tom Guarriello, Ph.D. 

Thanks for subscribing to the RoboPsych Newsletter

If you're not subscribed, please sign up here.  
 
Be sure to subscribe to the RoboPsych Podcast!
 
Our mailing address is:
tom@RoboPsych.com

unsubscribe from this list    update subscription preferences 
 







Issue #78: RoboHysteria


The RoboPsych Newsletter

Exploring The Psychology of 
Human-Robot Interaction
Issue 78: October 3, 2017


RoboHysteria
Quote of the Week:

 

“We are surrounded by hysteria about the future of Artificial Intelligence and Robotics. There is hysteria about how powerful they will become how quickly, and there is hysteria about what they will do to jobs.

 

As I write these words on September 2nd, 2017, I note just two news stories from the last 48 hours.

 

Yesterday, in the New York Times, Oren Etzioni, chief executive of the Allen Institute for Artificial Intelligence, wrote an opinion piece titled How to Regulate Artificial Intelligence where he does a good job of arguing against the hysteria that Artificial Intelligence is an existential threat to humanity. He proposes rather sensible ways of thinking about regulations for Artificial Intelligence deployment, rather than the chicken little “the sky is falling” calls for regulation of research and knowledge that we have seen from people who really, really, should know a little better.

 

Today, there is a story in Market Watch that robots will take half of today’s jobs in 10 to 20 years. It even has a graphic to prove the numbers.

 

The claims are ludicrous. [I try to maintain professional language, but sometimes…]

For instance, it appears to say that we will go from 1 million grounds and maintenance workers in the US to only 50,000 in 10 to 20 years, because robots will take over those jobs. How many robots are currently operational in those jobs? ZERO. How many realistic demonstrations have there been of robots working in this arena? ZERO. Similar stories apply to all the other job categories in this diagram where it is suggested that there will be massive disruptions of 90%, and even as much as 97%, in jobs that currently require physical presence at some particular job site. Mistaken predictions lead to fear of things that are not going to happen. Why are people making mistakes in predictions about Artificial Intelligence and robotics, so that Oren Etzioni, I, and others, need to spend time pushing back on them?

 

Rodney Brooks,
“The Seven Deadly Sins of Predicting the Future of AI”


 
Are We Done Freaking Out Yet?  

Rodney Brooks is not a happy roboticist!

This issue's Quote of the Week pretty well sums up Brooks' thoughts and feelings about the collective hysteria we've created about how robots and AI will impact employment, and maybe even conspire to wipe us out.

No more jobs! Existential threat!! 

Brooks had to resort to the (unprofessional!) word “ludicrous” to grab our attention.

Of course, Brooks is not only one of robotics' academic pioneers at MIT but also a cofounder of iRobot and the founder of Rethink Robotics. iRobot originated the Roomba and Rethink created Baxter and Sawyer, two of the most productive industrial robots now in use.

If anybody should know about what robots are capable of, it's Brooks. And he thinks we're freaking out due to our misunderstanding of some fundamental principles that underlie our capacity to produce AI-powered robots. 

Brooks' seven deadly sins of predicting the future of AI are as follows:
  1. Over and Under Estimating - This “sin” reflects Brooks' support of futurist Roy Amara's now-famous maxim: “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” Brooks cites GPS as a perfect example of this principle and sees AI doom predictions as another.
  2. Imagining Magic - Arthur C. Clarke's insight that “any sufficiently advanced technology is indistinguishable from magic” grounds Brooks' thinking. He says, “when a technology passes that magic line anything one says about it is no longer falsifiable, because it is magic. This is a problem I regularly encounter when trying to debate with people about whether we should fear just plain AGI, let alone cases C or D from above. I am told that I do not understand how powerful it will be. That is not an argument. We have no idea whether it can even exist.” This is like trying to argue with someone who believes in werewolves.
  3. Performance vs. Competence - We humans generalize abilities. If someone is smart in one area, say medicine, we tend to assume she will be smart in others, like investing. Since we now regularly interact with AI systems that can perform a difficult task, like identifying a Frisbee in a photograph, we are prone to believing that that system will also be able to do other things, like tell us how to throw a Frisbee. Not true. Generalizing from performance to competency is a big mistake.
  4. Suitcase Words - When we hear words like “learning,” “intelligence,” or “consciousness” applied to AI, we instinctively jump to ascribing human-level meaning to the terms. But experts will tell us (if they're not too strongly influenced by the opportunity for a sensational headline or article) that machines don't learn, exhibit intelligence, or experience consciousness in anything like the way humans do. So, ascribing our own examples of these capabilities to machines is both natural and deeply mistaken.
  5. Exponentials - We've all seen the Moore's Law charts that plot the historical growth in transistor density. Since then, we've become very comfortable speaking in exponential terms. Brooks thinks we should beware: many “exponential” phenomena are actually S-curves that plateau and may, or may not, continue to grow. He says: “so when you see exponential arguments as justification for what will happen with AI remember that not all so called exponentials are really exponentials in the first place, and those that are can collapse suddenly when a physical limit is hit, or there is no more economic impact to continue them.” Much of AI hysteria is grounded in belief in exponential growth in both hardware and software (the short sketch after this list shows why early data can't tell the two apart).
  6. Hollywood Scenarios - Face it, stories like Frankenstein, Terminator, and Blade Runner deeply affect our thinking about AI and robots. Mostly, this effect is unconscious. Say “robot” and lots of people think “killer....” These scenarios have as much connection with the realities of AI's future as The Martian does with interplanetary travel.
  7. Speed of Deployment - Software updates happen relatively quickly. Hardware, not so much. And the kinds of AI scenarios we see in many imagined situations demand lots of new hardware. As Brooks says, “A lot of AI researchers and pundits imagine that the world is already digital, and that simply introducing new AI systems will immediately trickle down to operational changes in the field, in the supply chain, on the factory floor, in the design of products. Nothing could be further from the truth. The impedance to reconfiguration in automation is shockingly mind-blowingly impervious to flexibility.”
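To make the exponentials point concrete, here is a minimal sketch (mine, not Brooks'; the growth rate and ceiling are arbitrary illustrative numbers) comparing a true exponential with a logistic S-curve. The two are nearly indistinguishable at first, which is exactly why early data points can't tell you whether a trend will keep compounding or quietly saturate when it hits a physical or economic limit.

```python
import math

def exponential(t, rate=0.5):
    """Unbounded exponential growth, starting from 1 at t=0."""
    return math.exp(rate * t)

def s_curve(t, rate=0.5, ceiling=100.0):
    """Logistic growth starting from 1: looks exponential early on, flattens near the ceiling."""
    return ceiling / (1.0 + (ceiling - 1.0) * math.exp(-rate * t))

# Early on the two curves track each other; later they diverge dramatically.
for t in range(0, 31, 5):
    print(f"t={t:2d}  exponential={exponential(t):12.1f}  s_curve={s_curve(t):6.1f}")
```

Run it and the first few rows look like the same curve; by t=20 the exponential is in the tens of thousands while the S-curve has flattened just below its ceiling.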
 Brooks believes that every doomsday scenario we see today is founded in these “sins.” He says: “In my experience one can always find two or three or four of these problems with their arguments.”

So, the next time you see a shocking prediction of millions of jobs being eliminated in the next five years, or robot overlords treating humans like pets, take a step back and see if the claim can pass Brooks' “sins” test. There's a good chance we're seeing still another example of overheated hype and not the end of the world as we know it.
 
Tom Guarriello, Ph.D. 

Thanks for subscribing to the RoboPsych Newsletter

If you're not subscribed, please sign up here.  
 
Be sure to subscribe to the RoboPsych Podcast!
 
Our mailing address is:
tom@RoboPsych.com

unsubscribe from this list    update subscription preferences 
 







Issue #77: Should We Build Robots Men Can Rape?


The RoboPsych Newsletter

Exploring The Psychology of 
Human-Robot Interaction
Issue 77: September 20, 2017


Should We Build Robots That Men Can Rape?
Quote of the Week:


“Sex robots are likely to play an important role in shaping public understandings of sex and of relations between the sexes in the future. This paper contributes to the larger project of understanding how they will do so by examining the ethics of the “rape” of robots. I argue that the design of realistic female robots that could explicitly refuse consent to sex in order to facilitate rape fantasy would be unethical because sex with robots in these circumstances is a representation of the rape of a woman, which may increase the rate of rape, expresses disrespect for women, and demonstrates a significant character defect. Even when the intention is not to facilitate rape, the design of robots that can explicitly refuse consent is problematic due to the likelihood that some users will experiment with raping them. Designing robots that lack the capacity to explicitly refuse consent may be morally problematic depending on which of two accounts of the representational content of sex with realistic humanoid robots is correct."

Robert Sparrow, “Robots, Rape, and Representation”


 
Who Creates/Uses Rape Robots?  

Roboticists are considering building sex robots that encourage their own rape. 

This issue’s quote is the abstract from a recent article that explored ethical questions regarding creating robot sex dolls that can refuse sexual consent.

Think about that for a second. 

We are about to see widespread production of sex dolls whose “repertoire” of “consenting behavior” will range from enthusiastic participation to (potentially) violent verbal and physical resistance. Not to mention the possibility of producing models that could deliberately enable the simulation of child sexual abuse.

Most of us see more than a few worms crawling out of this can.

First off, many cringe at the image of a robot being designed to have sex with a person. Others see potential individual and/or societal benefits. Perhaps even more of us are repelled by the prospect of a robot built with the capacity to withhold consent to sex. Since the (very short) definition of rape is “sex without consent,” these designers would be creating the conditions for humans to routinely “rape” robots which (not who) say “no.” 

Would having sex with that robot be “rape?” 

If we assume that humanoid robots are representations of humans, then the behavior of humanoid robots expressly designed to engage in sexual activities with humans retains some resemblance to those activities taking place between humans. The exact nature of that resemblance may be debated (since “sexual activity” takes place between humans and, infrequently, other animals, not between humans and inanimate objects; we usually call that “masturbation”), but it seems clear the intent of robot designers is to simulate human sexual situations as realistically as possible. 

But, rape physically and psychologically harms people. Robots, on the other hand, are not (yet? ever?) sentient and experience none of the trauma of a human rape victim. So does simulated rape harm anyone? Many ethicists, including Sparrow in the article quoted above, would say that, as a representation of rape, simulated rape is ethically equivalent to rape itself. 

Of course, there are those who would say that since a robot is an artifact it is incapable of ever actually “consenting” to sexual contact. But deliberately creating a humanoid robot that would simulate refusing sexual consent -- in essence creating a situation in which a person (over 90% of whom are male) who has purchased the robot largely for the purpose of his sexual pleasure is “denied” that pleasure -- is an invitation to simulated rape, complete with simulated escalating “resistance.” 

Centuries ago, Immanuel Kant asked a question about this kind of situation: “What sort of person would do that?” Who is the person who chooses to experience sexual pleasure by “forcing himself” on a robot that is simulating refusal? Furthermore, who is the person who designs and produces a robot that credibly simulates the behavior of a woman being raped after withholding sexual consent? 

These are questions that ask us to make connections between behavior and character. We’ve often been told to “punish the behavior, not the person.” Ethically, however, the answers to these questions seem clear. While some arguments (flimsy, in my opinion) have been made that sexual robots which behave in these ways may prevent actual rapes, I think the much more credible point of view is that creating these kinds of robots reinforces and strengthens unethical behavior. I believe the chances are better that encouraging this practice actually increases the potential incidence of rape. 

We’re about to enter into an era in which we will be called on to make many ethical decisions we’ve not had to make before. Encouraging any manner of violent sexual behavior against women does not represent the kind of social, scientific, or technological progress to which we aspire. 

We can -- must -- do better than this.

 
Tom Guarriello, Ph.D. 

Thanks for subscribing to the RoboPsych Newsletter

If you're not subscribed, please sign up here.  
 
Be sure to subscribe to the RoboPsych Podcast!
Our mailing address is:
tom@RoboPsych.com

unsubscribe from this list    update subscription preferences 
 







Issue #76: Cold Watermelon on a Hot Summer Day


The RoboPsych Newsletter

Exploring The Psychology of 
Human-Robot Interaction
Issue 76: September 6, 2017


Cold Watermelon on a Hot Summer Day
Quote of the Week:


“In more sustained contexts, where humans interact with robots over a significant amount of time and in a diversity of situations, what should the nature of that interaction be? One central issue in this circumstance is whether we (the robot and I) can understand one another – something that may be accomplished by specific communicative practices. Communicating with a robot via speech and/or gesture, however, turns out to be a complicated thing if one aims at smooth and reliable communicative practices. Much of communication depends on implicit aspects – what is not said is sometimes of greater importance than what is explicitly said, and non-conscious gestures, postures, movements and bodily expressions are often more important than consciously produced signs. For this and other reasons there is a vast and growing literature on social cognition in psychology, neuroscience, and philosophy of mind."
 

Shaun Gallagher, “You and I Robot”


 
What Does Watermelon Taste Like?  

You’ve known the answer to that question since you were about 18 months old, I’d bet.

But, can you answer the question: what does watermelon taste like, without using the word “watermelon”?

Try it.

I can’t.

Trying to describe the flavor of a ripe, juicy hunk of watermelon to anyone who’s never eaten the fruit isn’t possible for me. To those who have, I can just say, “you know, it tastes like watermelon!”

And, that illustrates just one of the things that will always distinguish us from robots.

The lived-experience of tasting watermelon is an emergent phenomenon that takes place within embodied animals. If you think about it, “tasting watermelon” involves seeing, smelling, touching, hearing, as well as chewing. Even though we can’t know exactly what they’re experiencing, we know that other animals eat watermelon, and appear to enjoy it as much as we do!



Come to think of it, I can’t know exactly what you experience when you eat watermelon! I mean, I assume watermelon tastes the same to you as it does to me, but I can’t actually say that with 100% certainty.

Uh oh. It feels like we’re getting into difficult territory.

But, we’re not the first to wander into this area. Philosophers, psychologists, neuroscientists, chefs, and moms have been trying for millennia to figure out how we can know that others will share the “same” experience as we do.

Turns out our brains have evolved to enable us to resonate with one another’s experience through a set of structures called “mirror neurons.” As Wikipedia puts it:

“A mirror neuron is a neuron that fires both when an animal acts and when the animal observes the same action performed by another. Thus, the neuron "mirrors" the behavior of the other, as though the observer were itself acting. Such neurons have been directly observed in primate species. Birds have been shown to have imitative resonance behaviors and neurological evidence suggests the presence of some form of mirroring system.”

So (even though the mechanism is not totally understood and is still being investigated), when I see apes eating a piece of watermelon (or, it turns out, even when they see me eating one), our mirror neurons fire and create experiences that “mirror” those that the ape has.

Robots, on the other hand, will never “taste” watermelon. Oh, we may equip them with sensors that immediately identify the fruit from its visual, physical, and molecular composition, but no machine will ever, ever, ever, experience the juicy deliciousness of a bite of cold watermelon on a hot day.

Nor will they experience the joy of seeing their newborn child’s face for the first time, or the sorrow of learning of the passing of a loved one.

These are existential moments…moments that are deeply intertwined with our bodily, cultural, and personal histories.

In the short article quoted above, philosopher Shaun Gallagher describes the combined power of those histories as our shared “massive hermeneutical background.” Quite a mouthful! What does it mean?

Basically, the phrase refers to the incalculably enormous inventory of embodied experiences that we have lived…that are, in fact, our lives…since birth. Those experiences include all of the sensory and social interactions we’ve ever experienced, most of which are far removed from our everyday awareness. These experiences can be evoked (practically instantaneously) by verbally or non-verbally “referring” to them, re-calling them forth, thereby enabling our “understanding” of what others are experiencing (which, roughly, is where the “hermeneutical” quality comes in).

Doing so verbally is the magic that masterful describers like Marcel Proust utilize in their finest works. This passage depicting the emergence of a memory slowly recovered from his existential depths upon tasting a morsel of a tea-soaked cookie is a perfect example:

“And suddenly the memory revealed itself. The taste was that of the little piece of madeleine which on Sunday mornings at Combray (because on those mornings I did not go out before mass), when I went to say good morning to her in her bedroom, my aunt Léonie used to give me, dipping it first in her own cup of tea or tisane. The sight of the little madeleine had recalled nothing to my mind before I tasted it; perhaps because I had so often seen such things in the meantime, without tasting them, on the trays in pastry-cooks' windows, that their image had dissociated itself from those Combray days to take its place among others more recent; perhaps because of those memories, so long abandoned and put out of mind, nothing now survived, everything was scattered; the shapes of things, including that of the little scallop-shell of pastry, so richly sensual under its severe, religious folds, were either obliterated or had been so long dormant as to have lost the power of expansion which would have allowed them to resume their place in my consciousness. But when from a long-distant past nothing subsists, after the people are dead, after the things are broken and scattered, taste and smell alone, more fragile but more enduring, more unsubstantial, more persistent, more faithful, remain poised a long time, like souls, remembering, waiting, hoping, amid the ruins of all the rest; and bear unflinchingly, in the tiny and almost impalpable drop of their essence, the vast structure of recollection.” (emphasis added)

No AI, no robot, will ever come this close to bringing a “physically recalled moment” to life for another. Yes, we may teach them to write as if they were Proust, but we will never share with a machine the massive hermeneutical background we share with that long-gone writer that makes such a passage so powerfully vivid.

It’s important that we appreciate the ways in which we and AI-equipped robots will be similar, and the ways in which we will be different. One way to do so will be to ask the latest humanoid version very simple (embodied) questions, like what watermelon tastes like. And when you do so, be sure to look into its eyes very carefully to see the difference between a life form like yourself, and one that is, at best, only trying to simulate the answer; trying to do what it can to keep up the appearance of being a fellow person.

 
Tom Guarriello, Ph.D. 

Thanks for subscribing to the RoboPsych Newsletter

If you're not subscribed, please sign up here.  
 
Be sure to subscribe to the RoboPsych Podcast!
Our mailing address is:
tom@RoboPsych.com

unsubscribe from this list    update subscription preferences 
 







Issue #75: Robots Don't Kill People; People With Robots Kill People


The RoboPsych Newsletter

Exploring The Psychology of 
Human-Robot Interaction
Issue 75: August 21, 2017


Keeping Humans in Killing
Quote of the Week:


“As companies building the technologies in Artificial Intelligence and Robotics that may be repurposed to develop autonomous weapons, we feel especially responsible in raising this alarm.

We warmly welcome the decision of the UN’s Conference of the Convention on Certain Conventional Weapons (CCW) to establish a Group of Governmental Experts (GGE) on Lethal Autonomous Weapon Systems. Many of our researchers and engineers are eager to offer technical advice to your deliberations.

We commend the appointment of Ambassador Amandeep Singh Gill of India as chair of the GGE. We entreat the High Contracting Parties participating in the GGE to work hard at finding means to prevent an arms race in these weapons, to protect civilians from their misuse, and to avoid the destabilizing effects of these technologies.

We regret that the GGE’s first meeting, which was due to start today, has been cancelled due to a small number of states failing to pay their financial contributions to the UN. We urge the High Contracting Parties therefore to double their efforts at the first meeting of the GGE now planned for November.

Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.

We therefore implore the High Contracting Parties to find a way to protect us all from these dangers.”

 

An Open Letter to the United Nations
Convention on Certain Conventional Weapons


 
Robots Don't Kill People, People With Robots Kill People 

Killer robots.

The two words go together so naturally that it's easy to forget that they only go together in science fiction. But, Elon Musk and 115 other luminaries today released a letter urging the UN to establish a special group to “protect us from the dangers” of the “lethal autonomous weapon systems” (LAWS) that threaten to become the “third revolution” (after gunpowder and nuclear weapons) in warfare.

The prospect of that wicked-looking little tank pictured above zipping around the desert, the mountains, or a city, scanning for hostiles and killing them on sight is an image that's haunted our fiction-fueled dreams for about a hundred years.

So, it makes sense to ban “killer robots,” right? 

Sure...but...

Earlier this week, I was fortunate enough to spend a couple of days with a group of people who are dedicated to a cause few of us have spent much time thinking about...until the last few weeks, that is. These are folks who spend their lives working on nuclear arms control; some on complete nuclear weapon elimination. Kim Jong-un or not, these people work daily to prevent the horror of nuclear weapon use.

Do you know how many nuclear weapons the nine nuclear powers in the world possess right now?

(Bonus: can you name the nine countries?)

Go ahead...think of a number...I'll wait.

Got it?

~15,000

How close was your estimate? Most of us think the number is much smaller. 

The folks who are calling for banning killer robots...are they equally troubled by that number? Somehow, the massive arsenal held by (spoiler alert!) the United States, United Kingdom, Russia, France, China, Pakistan, India, North Korea, and Israel exists within a belief system of “mutually assured destruction” (MAD). No nation, the thinking goes, will use nuclear weapons against another nation which possesses them because the potential for a devastating counter-strike is too high. 

Now, saying that LAWS would be a “revolution” on a par with nuclear weapons sounds logical. Until you think about it for a minute. Yes, LAWS firing high-caliber ammunition or dropping precision-guided missiles from drones is horrifying, but their destructive power is puny compared with that of even one of those 15,000 nuclear warheads. And, LAWS whose “kill decisions” are controlled remotely by a human operator sound more reassuring than one with no human in the loop, until we remember there is always a human in the loop! 

Nonsensical fantasies aside, LAWS could only be put into battle situations by people...they will not decide on their own to travel to a foreign land, select targets, and begin eliminating them. Every step of the way, human operators will make decisions, right up to the moment that the robot executes a kill algorithm written by human programmers.

Lately, we've spent lots of time thinking about ethical responsibilities in far-fetched autonomous vehicle scenarios, ultimately deciding that the vehicle itself cannot be held accountable for any injuries or damages it causes. Only people can be accountable.

Same for LAWS. There's no reason to believe that the Uniform Code of Military Justice won't hold the commander of a LAWS unit that destroys a non-combatant village just as accountable as it would the commander of an Army company. Weapon systems do not obviate command responsibility.  

That isn't to say that we shouldn't strive to put LAWS in the same category as chemical, biological, and nuclear weapons. 

But while we're wringing our hands worried about images of a battalion of rogue Terminators, let's not forget that we are faced daily with a world whose safety rests on a foundation of pure MADness. The sooner we can eliminate the very real threat of those weapons...the weapons that really do pose an existential threat to our species and vast swaths of life on the planet...the sooner we can turn our attention to other categories of destructiveness which might currently be more titillating, but which in the end are just the latest extension of our readiness to kill one another with our newest technological innovation.

Clever sleights of Pandora's hand aside, lethal weapons will never be autonomous; they will always have their creators' fingerprints all over them. Let's not make believe it could ever be otherwise.    

 
Tom Guarriello, Ph.D. 

Thanks for subscribing to the RoboPsych Newsletter

If you're not subscribed, please sign up here.  
 
Be sure to subscribe to the RoboPsych Podcast!
Our mailing address is:
tom@RoboPsych.com

unsubscribe from this list    update subscription preferences 
 







Issue #74: Cute, Very Cute


The RoboPsych Newsletter

Exploring The Psychology of 
Human-Robot Interaction
Issue 74: August 7, 2017


Cute, Very Cute
Quote of the Week:


“Nearly every step wrought havoc upon the prototype walker's frame. Designed to activate landmines in the most direct means possible, the EOD robot was nevertheless persistent enough to pick itself back up after each explosion and hobble forth in search of more damage. It continued on until it could barely crawl, its broken metal belly scraping across the scorched earth as it dragged itself by a single remaining limb. The scene proved to be too much for those in attendance. The colonel in charge of the demonstration quickly put an end to the macabre display, reportedly unable to stand the scene before him. “This test, he charged, was inhumane,” according to the Washington Post.

But how can this be? This was a machine, a mechanical device explicitly built to be blown up in a human's stead. We don't mourn the loss of toasters or coffeemakers beyond the inconvenience of their absence, so why should a gangly robotic hexapod generate any more consternation than a freshly squashed bug? It comes down, in part, to the mind's habit of anthropomorphizing inanimate objects. And it's this mental quirk that could be exactly what humanity needs to climb out of the uncanny valley and begin making emotional connections with the robots around us.

These sorts of emotional connections come more easily in military applications, where soldiers’ lives depend on these devices working as they should. “They would say they were angry when a robot became disabled because it is an important tool, but then they would add ‘poor little guy,’ or they'd say they had a funeral for it,” Dr. Julie Carpenter of the University of Washington wrote in 2013. “These robots are critical tools they maintain, rely on, and use daily. They are also tools that happen to move around and act as a stand-in for a team member, keeping Explosive Ordnance Disposal personnel at a safer distance from harm.”

Andrew Tarantola
Engadget


 
Cute Mind Games 

Isn't that the cutest robot you've ever seen up there in that picture?

Cute. Nature has been working on our design for millions of years, slowly making untold minute changes both in the genes that eventually replicate into the patterns that create human bodies and their capabilities, and in the environments in which those gene-productions will find themselves.

Can you imagine a time before “cute” existed? When looking at that picture didn't bring a reflexive “awww” to your lips? 

How about this one?



It turns out that the same evolutionary mechanisms that make that puppy irresistible are also at work when we're interacting with robots. 

And, that's no accident. See, we know a lot about “cute.” Biologists have been researching the characteristics that most of us call “cute” for a long time, going back to Konrad Lorenz's work in the mid-20th century. Here's how he distinguished cute from not cute. Can you tell which is which?



Of course you can. Cute = images on left; not-cute = images on right.

We've designed objects with cute features for centuries. Why? Because nature has pre-loaded biological mechanisms into our DNA that greatly enhance the probability that we will form attachments with, and nurture, organisms that fit the “cute” mold. (See, human infants.)

We just can't help ourselves. 

Enter robots. 

As soon as we began to imagine artificial humanoid life forms we were forced to confront the question of their cuteness. How cute should they be? That depended on the story you wanted to tell.

Want to tell a tale about the dangers of humans stepping into the divine job of creating life? Sounds like a job for not cute.



How about one about a boy who finds a robot that helps him find his life's purpose after his brother dies? Definitely calls for cute.

 

Now, all those psycho-physical mechanisms that evolved over the years automatically kick in the very second we perceive an object that resembles a human in even the remotest ways. Psychological experiments show that we will attribute human characteristics to anything...really, anything!...that looks or acts even vaguely like a human.

Don't believe me? Go watch this short video. Now, tell someone what you saw. What did you say? Did you describe several geometric shapes moving around a confined space in precise engineering terms? Or did you say something else? Perhaps something about “characters” playing out an ancient tale?   

Even when robot designers do their best to minimize their machines' human resemblance, we humans will find some feature that triggers our anthropomorphic instincts...especially if that machine has even a scintilla of cuteness.  

Well, so what? So cute robots will tug on our heartstrings a little. Big deal.

But, is cute always desirable? Are there times when cute might be a detriment? 

This week's Quote contains a clue to that answer. When a robot's purpose is to perform some dull, dirty, or dangerous task (like battlefield explosive ordnance disposal, or EOD), the last thing we want kicking in are cuteness-triggered nurturing instincts. The quoted colonel's comment that the EOD test was “inhumane” is a dead giveaway that this robot was a bit too cute for the task at hand. 

All of which is to say that our desire to create robots with which we will establish emotional connections is not without peril. My podcast co-host, Dr. Julie Carpenter, has written that military personnel have, in fact, put themselves in danger due to reluctance to send EOD robots into “harm's way.” Then there's the question of inappropriate emotional connections between robots and children. There's little doubt that increasingly capable, cute home robots will become important companions for young children (as well as the elderly). How will we manage these relationships so that people do not make inappropriate decisions based on the power of cuteness?

Robots are going to enter our homes, much as domesticated dogs did centuries ago. Like dogs, robots will require care, and owning them will bring with it both the joys of companionship and some measure of emotional (and physical) risk. It is imperative that robot designers take their responsibilities seriously since they risk triggering some of our deepest evolutionary mechanisms with just the shape of a face, or the “longing” looks of slightly oversized “soulful” eyes.

An object's integrated form/function has rarely been as important as it will be in our soon-to-arrive social robots. 

Tom Guarriello, Ph.D. 

Thanks for subscribing to the RoboPsych Newsletter

If you're not subscribed, please sign up here.  
 
Be sure to subscribe to the RoboPsych Podcast!
Our mailing address is:
tom@RoboPsych.com

unsubscribe from this list    update subscription preferences 
 







Issue #73: The Simple Things In Life


The RoboPsych Newsletter

Exploring The Psychology of 
Human-Robot Interaction
Issue 73: July 24, 2017


Simple, But Not Easy
Quote of the Week:

 

Robots can do all kinds of things that humans just can’t.

They can sit in the same place for years and weld car parts together with a level of precision that no human can faithfully reproduce. Some smart machines can speak dozens of languages. Others can fly hundreds of feet in the air, surveilling life down below.

But sometimes the biggest challenges for robots are things that we humans take for granted. Like, say, not falling into decorative fountains.

On Monday, a robot security guard from the California-based startup Knightscope fell in a fountain outside of a Washington, D.C., office building. The wheeled robot was making its rounds when it encountered a common feature of manmade environments it’s not built to handle: a fountain with stairs.

...

Presumably, the robot is equipped with computer vision software that should be able to detect obstacles, like bodies of water, or cars. But clearly, its smart eyes didn’t quite work in this instance, demonstrating how difficult it is for robots to navigate a world built for humans.


“A Robot Security Guard Rolled Its Way Into A Fountain”
April Glaser, Slate


 
It's The Simple Things In Life   


How old were you when you learned not to try to walk on water?

Crazy question, right? Who doesn't know that you can't do that?  

But, apparently, this Knightscope security robot at a D.C. office building tried to do just that last week with the result that 99.999% of us would have predicted. It fell in a fountain. 

It's the kind of incident that should remind us of a very important fact about even the “smartest” AI-equipped robots: it's the simple things in life that they're clueless about.

The robot's maker, Knightscope, claims that the system is equipped with “advanced anomaly detection,” “autonomous presence,” “live 360 degree video streaming,” and, coming soon (!), “gun detection.”

Cool! 

But, yet, apparently, if the sun reflects off the surface of a fountain in just the right way, it will merrily roll along right into the drink. Kind of like that Tesla did in Florida, in a way.

Twitter wags were quick to proclaim the incident a “suicide” and posted photos of the memorial that people created at the bot's charging station. 

 
A couple of our most fundamental cognitive tendencies are on display in this incident.

First off, we humans “know” a lot more about the world than we know we know. I mean, we know we can't walk on water but if somebody asked you to make a list of all the things you know, chances are pretty good that you wouldn't put “no water walking” on the list.

It's just “common sense.”

“No water walking” is part of what phenomenological philosopher Alfred Schutz called our “stock of knowledge.” This is the vast collection of tacit, unconscious facts about the world that makes up the infrastructure of our everyday lives. Without this lived knowledge, every moment would be filled with an exhaustingly complex set of conscious decisions. You know: let go of an object: it falls. Wave a pedestrian across a street: stop driving until she crosses. Answer a question with a shrug: ask me something else.

Robots, on the other hand, only know what we tell them (and, recently, what we “train” them to know through structured learning models). But there's no way we can train them to know everything we know.

Hell, we don't even know everything we know, so how could we? 
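
To make the gap concrete, here's a deliberately naive sketch of my own (in Python, with made-up rules and situation labels) of what a hand-coded “stock of knowledge” might look like, and why it breaks the moment perception labels the world in a way nobody anticipated:

# A toy illustration (not any vendor's actual code): common sense as an
# explicit lookup table of situation -> safe action. Every rule here is
# something a person "just knows"; a robot only has the rules we wrote down.
COMMON_SENSE = {
    "object released":         "expect it to fall",
    "pedestrian waved across": "stop and wait until clear",
    "stairs ahead":            "do not drive forward",
    "water surface ahead":     "do not drive forward",
}

def decide(perceived_situation: str) -> str:
    """Return the rule for a situation, or blunder ahead if none matches."""
    # The failure mode: perception labels the fountain as something else
    # ("bright flat surface"), no rule fires, and the robot rolls on.
    return COMMON_SENSE.get(perceived_situation, "proceed as planned")

print(decide("stairs ahead"))            # "do not drive forward"
print(decide("bright flat surface"))     # "proceed as planned"  <- into the drink

The point isn't the code; it's that every rule in that table had to occur to somebody in advance, and “don't drive onto sun-dazzled water” apparently didn't.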


This means that robots will be teaching us things we “know” by doing things like rolling into fountains for the foreseeable future. It's like realizing your child doesn't know that people in England drive on the other side of the road when she steps off a London curb without first looking to the right. Robots will be surprising us with their “stupidity” for years to come. 

Then there's the “memorial.”

I know, I know...people were just “kidding around” when they placed their cards, flowers, and photos. And yet, Oliver Griswold's tweet captured something important about our cognitive operations when he observed: “The future is weird.”

The sight of this barely humanoid (more like, R2D2oid) robot lying in the fountain almost instantly evoked a slew of remarkably consistent anthropomorphized reactions: the robot was “bored,” or “despondent,” “hated its job,” or “overheard some horrible news” and decided to “end it all.”

Moments like this give us a glimpse into just how different many parts of the future will be from the present...how strange it will be to interact with systems that evoke odd emotional reactions while highlighting aspects of our world that we take for granted.

Robots will make us feel both inferior and superior to them at the same time. 


It will be the simplest everyday things that they won't be able to master, things that we'll wryly mock, that will allow us to maintain our comfort in our species' exceptionalism as AI comes to outperform us in one domain after another.

But, sooner or later, that means that we'll have to figure out what's so special about being human and resign ourselves to the odd realities of our very, very weird future. 

 

Tom Guarriello, Ph.D. 

Thanks for subscribing to the RoboPsych Newsletter

If you're not subscribed, please sign up here.  
 
Be sure to subscribe to the RoboPsych Podcast!
Our mailing address is:
tom@RoboPsych.com

unsubscribe from this list    update subscription preferences 
 







Issue #72: What Is Sex For?


The RoboPsych Newsletter

Exploring The Psychology of 
Human-Robot Interaction
Issue 72: July 10, 2017


What Is Sex For?
Quote of the Week:

 

In 2017 most liberal societies accept or tolerate sex in many different forms and varieties. Sex toys and masturbation aids have been used for centuries and can be easily purchased in stores in many countries. Now companies are developing robots for sexual gratification. But a robot designed for sex may have different impacts when compared with other sex aids. Those currently being developed are essentially pornographic representations of the human body – mostly female. Such representations combined with human anthropomorphism may lead many to perceive robots as a new ontological category that exists in a fantasy between the living and the inanimate. This is reinforced by robot manufacturers with an eye to the future. They understand the market importance of adding intimacy, companionship, and conversation to sexual gratification.

The aim of this consultation report is to present an objective summary of the issues and various opinions about what could be our most intimate association with technological artefacts. We do not contemplate or speculate about far future robots with personhood - that could have all manner of imagined properties. We focus instead on significant issues that we may have to deal with in the foreseeable future over the next 5 to 10 years.


Our Sexual Future With Robots
By Noel Sharkey, Aimee van Wynsberghe, Scott Robins, and Eleanor Hancock


 
Hello, Robot, You Sexy Thing!   

What is sex for?

Crazy question, right? Who doesn't know what sex is for? It's obvious.

But, let's take a step back for a second to consider the question in light of the “consultation report” published this week by the Foundation for Responsible Robotics (FRR). The report was the organization's attempt to answer a set of important questions about human-robot sexuality by surveying the literature and presenting the points of view of leading thinkers and researchers in the field. Questions like: Would people have sex with a robot? What kind of relationship can we have with robots? Could robots help with sexual healing and therapy? 

Not surprisingly, experts' points of view differ dramatically. 

Some believe sexual contact between humans and robots is an abomination, an unnatural act that objectifies (typically) women and psychologically/ethically harms any humans who engage in it.

Others see this kind of activity as a way for people who are unable to engage in sexual relations with other humans (for whatever reasons) to enjoy the benefits of sexual contact. Human-robot sexuality is a therapeutic tool, they say, to help millions to experience sexual pleasure in a legal, personal, safe manner. The majority who fall into this camp see robot sex as akin to masturbation with a physically present “partner.”

But, what about that peculiar question: what is sex for? 


If you think about it for a minute, you'll start to get a glimpse of the complexities that are right beneath the surface of the initial obviousness.

Sex's first reason for being...its main “job,” if you will... is procreation.

Technological advances aside, heterosexual sexual relations remain the only reliable method for making new people at scale. Let's call that sex's #1 functional job. That job...creating other humans in sufficient quantities to replace those who die each year...can only be accomplished via sex with another member of our species.

Never's a long time, but procreation's a job robots are likely to never be able to perform.  

But, sex has other functional jobs as well. Providing physical pleasure, for example. The human body has evolved to deliver a systemic array of pleasurable experiences to accompany the process of sexual arousal and release.

Can robots deliver those experiences? Can they do this job satisfactorily?

History and experience show us that the human imagination is more than up to the task of creating fantasies that enable (at least) a reasonable facsimile of sexual arousal leading to orgasm. Are the physical pleasures of human-human sexual contact and those enabled by human-robot contact qualitatively equivalent? Are they sufficiently similar to make the latter an acceptable substitute for the former? Personal preferences differ, of course, but the long-standing proliferation of sex toys, aids, and apparatus (now including “teledildonics”) indicates that for many people objects are a perfectly acceptable way to get sex's physical pleasure job done. It's reasonable, then, to expect that robots will be more than capable of successfully doing this job. 


What else is sex for?

At some point in our species' evolution, relationships between humans took on emotional characteristics that were experienced and expressible through sexual behavior. The complex process of becoming intimately emotionally involved with another human became socially (if not physiologically) intertwined with being physically intimate. 

Always? Sometimes? Maybe rarely. For many, never.

That meant that, for some (many?), emotional intimacy became a “nice to have,” but not a “must have,” element of sex. The FRR report touches on the complexities of getting this job done. Prostitutes interviewed for the document reported that customers frequently ask them to feign emotional responses in sexual encounters, wanting to believe they are experiencing “real” (heartfelt?) orgasms. Robot makers understand this emotional job, and are doing their best to provide artifacts that simulate contextually-accurate “faux-motional” reactions through gestures, facial expressions, sounds, and speech.

And this is where robots and people part ways most clearly. The psychological operations one must engage in to experience sexual-situational-emotional-fulfillment from a robot are non-trivial. One must ignore significant aspects of the encounter (the artificial quality of the robot's verbal and non-verbal cues; the obvious absence of lived-temporality, proactivity, spontaneity; sensory elements like responsive touch, smell, taste) and behave “as if” (i.e., ignore) other artificial qualities. While those psychological  accommodations might be adequate for some of us in certain situations, the prospect of having sex's “emotional intimacy job” performed adequately, and over time, seems far-fetched. Perhaps anthropomorphism can lead us to “love” objects, but no degree of fantasy can credibly get us to have the experience of those objects loving us back. Our “desire to be desired” by another person is a significant part of human sexuality that robots can neither experience nor express.  

That means that human-robot sexual contact by definition violates the first rule of interpersonal behavior: it is inherently one-sided; it can never be inter-personal. Like all Turing-test challenging exercises, then, the very best can only be simulacra, copies of authentic human-human interactions, designed to trick humans into believing (actually, suspending the disbelief that is always right on the edge of awareness) they are in an inter-personal encounter. “One-sided sex” is a special category of what we usually mean when we consider human sexuality.  

Well, so what? Is that so bad? If people who are unable to have sex with others find robots acceptable substitutes, who are we to judge them? That's a question many ask when they imagine our future relationships with robots. The FRR imagines an increase in these kinds of relationships but reaches no conclusions about their impact on either individuals or society. Personally, I'm not inclined to view their proliferation as societally positive. 

The document also considers even more serious questions around the effects robots could have on sexual crimes, including child sex offenses. Again, FRR chooses not to take a definite position on this question, opting instead to present opposing arguments by experts in the field:

This is a question that suffers major disagreement. On one side, there is a small number who believe that expressing disordered or criminal sexual desires with a sex robot would satiate them to the point where they would not have the desire to harm fellow humans. On the other side, there are scholars and therapists who believe that this would be an indulgence that could encourage and reinforce illicit sexual practices. This may work for a few but it is a very dangerous path to tread and research could be very difficult. It may be that allowing people to live out their darkest fantasies with sex robots could have a pernicious effect on society and societal norms and create more danger for the vulnerable. Currently there is a lack of clarity about the law on the distribution of sex robots that are representations of children.

When I was a practicing psychotherapist I treated several sex offenders who exhibited persistent illegal behaviors, even in the face of serious legal consequences. In my experience, a robot, no matter how realistic, would not have led them to forgo real-world encounters.  


Regardless of our individual beliefs, feelings, and thoughts on these issues, human-robot sexual behavior is likely to become commonplace over the next decade. To wit, FRR cites recent examples of UK men bringing sex dolls on dates in public, and pub owners casually accepting the couples as they would human-human partners. 

We see here another example of 21st century technological innovation challenging us to confront difficult ethical questions (e.g., unemployment, autonomous lethal weapons, self-driving vehicle safety); forcing us to make decisions we never could have imagined being forced to make only a very short while ago.

In this case, one of those questions is, “what is sex for?”

 

Tom Guarriello, Ph.D. 

Thanks for subscribing to the RoboPsych Newsletter

If you're not subscribed, please sign up here.  
 
Be sure to subscribe to the RoboPsych Podcast!
Our mailing address is:
tom@RoboPsych.com

unsubscribe from this list    update subscription preferences 
 







Issue #71: Robots Can't Feel Your Pain, Fear, or Joy. So What?


The RoboPsych Newsletter

Exploring The Psychology of 
Human-Robot Interaction
Issue 71: June 26, 2017


What Will Humans Do?
Quote of the Week:

 

Early last year, the World Economic Forum issued a paper warning that technological change is on the verge of upending the global economy. To fill the sophisticated jobs of tomorrow, the authors argued, the ‘reskilling and upskilling of today’s workers will be critical’. Around the same time, the then president Barack Obama announced a ‘computer science for all’ programme for elementary and high schools in the United States. ‘[W]e have to make sure all our kids are equipped for the jobs of the future, which means not just being able to work with computers but developing the analytical and coding skills to power our innovation economy,’ he said.

But the truth is, only a tiny percentage of people in the post-industrial world will ever end up working in software engineering, biotechnology or advanced manufacturing. Just as the behemoth machines of the industrial revolution made physical strength less necessary for humans, the information revolution frees us to complement, rather than compete with, the technical competence of computers. Many of the most important jobs of the future will require soft skills, not advanced algebra.


The Future is Emotional
 By Marina Benjamin


 
Robots Can't Feel Your Pain, Fear, or Joy  

We humans have been working with machines for less than 200 years. That's no time at all when viewed in the context of our species' 150,000-year run as the planet's dominant force. That makes technology a late addition to our social repertoire. Long before we had machines, we'd developed a suite of social mechanisms for living, communicating, and working together. Unsurprising, then, that our ability to relate to one another emotionally has been a crucial factor in our rise to planetary dominance. 

Think about it. Being able to determine that one of your fellow hunter-gatherers was frightened, angry, surprised, disgusted, sad, or happy made it much more likely that you'd be in tune with other group members; that you'd experience and display resonance with what others were experiencing; that you had what we came to call empathy for what others were going through. Doing so meant you had a much better chance to succeed in the tribe: to survive and to spread your genes. This made our emotionally aware and expressive ancestors evolutionary winners. 

As we became increasingly cognitively and technologically capable, we came to believe that intelligence, as demonstrated by a mastery of abstract concepts and numbers -- particularly through science -- was the key to success.

Over the centuries, emotions came to be seen as “unreliable,” “subjective,” “soft” and, in most cultures, “feminine.” Real men did STEM (Science, Technology, Engineering, Math). That's what made the tools (and weapons) that made the world go round. 

Enter AI and robots. 

In the early days of the 21st century, STEM, the pinnacle of human achievement, became child's play for AI. As a result, no modern human would intentionally choose to manually calculate a rocket's launch details, the return on investment for a field of corn, or the perfect factory production schedule. These were now jobs for machines. And, if expert predictions are anywhere close to right, there will soon be many more jobs that fall into that category.

What's left for humans in a world where machines can do so much?

Our first thoughts take us to “emotional work.” When we hear that phrase today, it conjures up images of child care, nursing care, or elder care; or maybe social work or missionary outreach.

What do those jobs have in common? First off, they're almost guaranteed to be low paying. Emotional work is practically universally economically undervalued. Second, emotional work still carries substantial gender connotations. Caring, “the soft stuff,” is for women; men get real things done.

Around 20 years ago, the phrase “emotional intelligence” (EQ) started showing up in business literature, made popular by Dan Goleman's book of the same name. Leaders of organizations of all kinds began to appreciate that cognitive skills in themselves were not the last word in employee performance. In a world where technical skills were becoming increasingly widespread, organizations saw that an individual's EQ, emotional sensitivity and attunement to others, whether customers or fellow employees, often accounted for significant differences in job performance. Today, consistently demonstrated EQ is often a key differentiator between average and superior performers. 

As we look ahead to a world of machine workers, EQ is likely to become an even more important factor in business success. If the future of work is “take any current job and add AI,” as Kevin Kelly suggests, we might also say that the future of successful employment is “take any AI-assisted job and add EQ.” If every AI customer service agent can provide fast, accurate information, superior customer service will mean delivering that information in an emotionally intelligent manner. Top quality salespersons will develop emotionally-founded relationships with customers and use every interaction as an opportunity to demonstrate a caring approach to the customer's needs and desires. Physicians, police officers, teachers, and caregiving service providers of all kinds will be distinguished by their empathic resonance with their constituents and their families. 

We see this trend today in the highest levels of luxury services. Every luxury store employs customer service staff who are sensitive to the slightest nuances of their customers' experiences, anticipating reactions from those they know well. High-priced personal care providers do the same. Their goal is to make the customer service, sales, or patient experience as emotionally comfortable as possible. Personalized service to the wealthy feels emotionally intimate. 

We've also seen that services that were once only available to the wealthiest amongst us are now broadly available. Remember when only the rich took limos? Today, your Uber pulls up to your door in a matter of minutes, just like their limos did. Personal shoppers once flagged special items so that their top-spending customers could get first access. Today, Amazon will notify you of items you might be interested in based solely on a suite of rapidly improving algorithms that learn from your prior purchases and social graph. Remember when only the rich had financial advisers?  
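
For the curious, here's roughly how the simplest version of that kind of recommendation can work. This is a toy item-to-item sketch of my own (in Python, with invented purchase data), not Amazon's actual system, which is vastly more elaborate:

# Toy item-to-item recommender (illustrative only). Items that are bought
# together often are treated as "similar", and a shopper is shown the
# nearest neighbors of what they have already bought.
from collections import Counter
from itertools import combinations

baskets = [                       # hypothetical purchase histories
    {"phone case", "charger", "earbuds"},
    {"phone case", "charger"},
    {"earbuds", "charger"},
    {"notebook", "pen"},
]

co_counts = Counter()
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        co_counts[(a, b)] += 1
        co_counts[(b, a)] += 1

def recommend(purchased: set[str], top_n: int = 2) -> list[str]:
    """Rank unseen items by how often they co-occur with past purchases."""
    scores = Counter()
    for item in purchased:
        for (a, b), n in co_counts.items():
            if a == item and b not in purchased:
                scores[b] += n
    return [item for item, _ in scores.most_common(top_n)]

print(recommend({"phone case"}))  # likely ["charger", "earbuds"]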

What the wealthiest of us enjoy today, the rest of us are likely to experience (in some form) tomorrow. (This is the real meaning of "trickle down economics.")

If that trend continues, we will all have high expectations for the emotional quality of all of our service interactions in the future. We already see the best companies competing for your recommendations to others, using something called Net Promoter Score to evaluate the likelihood that their customers are satisfied, no, happy enough with their experiences to suggest them to their closest friends and family members. 
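
For those who haven't met it, Net Promoter Score is simple arithmetic on the standard 0-10 “how likely are you to recommend us?” question: 9s and 10s count as promoters, 0 through 6 as detractors, and the score is the percentage of promoters minus the percentage of detractors. A quick sketch (in Python, with made-up survey responses):

# Net Promoter Score from 0-10 "would you recommend us?" ratings:
# 9-10 = promoter, 7-8 = passive, 0-6 = detractor,
# NPS = %promoters - %detractors (ranges from -100 to +100).
def net_promoter_score(ratings: list[int]) -> float:
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

survey = [10, 9, 9, 8, 7, 6, 4, 10, 9, 3]   # hypothetical responses
print(net_promoter_score(survey))            # 5 promoters - 3 detractors of 10 -> 20.0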

Emotionally luxurious interactions will become the new standard of excellence when AI takes care of the nuts and bolts of business transactions. No matter how sophisticated robots become, genuine, caring human engagement will always be a step ahead. After all, our ancient ancestors learned the power of forming positive emotional connections with others, a secret that will continue to make all manner of future interactions more satisfying. Unless and until robots can go beyond simulated emotional expressions, every transaction will always have hidden within it an added layer of value that can only be delivered by another, feeling, person. 

 

Tom Guarriello, Ph.D. 

Thanks for subscribing to the RoboPsych Newsletter

If you're not subscribed, please sign up here.  
 
Be sure to subscribe to the RoboPsych Podcast!
Our mailing address is:
tom@RoboPsych.com

unsubscribe from this list    update subscription preferences 
 







Issue #70: Teaching Robots to be Ethical


The RoboPsych Newsletter

Exploring The Psychology of 
Human-Robot Interaction
Issue 70: June 16, 2017


Making Robots Ethical
Quote of the Week:


This virtual school, which goes by the name of GoodAI, specialises in educating artificial intelligences (AIs): teaching them to think, reason and act. GoodAI’s overarching vision is to train artificial intelligences in the art of ethics. “This does not mean pre-programming AI to follow a prescribed set of rules where we tell them what to do and what not to do in every possible situation,” says Marek Rosa, a successful Slovak video-game designer and GoodAI’s founder, who has invested $10m in the company. “Rather, the idea is to train them to apply their knowledge to situations they’ve never previously encountered.”

Experts agree that Rosa’s approach is sensible. “Trying to pre-program every situation an ethical machine may encounter is not trivial,” explains Gary Marcus, a cognitive scientist at NYU and CEO and founder of Geometric Intelligence. “How, for example, do you program in a notion like ‘fairness’ or ‘harm’?” Neither, he points out, does this hard-coding approach account for shifts in beliefs and attitudes. “Imagine if the US founders had frozen their values, allowing slavery, fewer rights for women, and so forth? Ultimately, we want a machine able to learn for itself.”

Rosa views AI as a child, a blank slate onto which basic values can be inscribed, and which will, in time, be able to apply those principles in unforeseen scenarios. The logic is sound. Humans acquire an intuitive sense of what’s ethically acceptable by watching how others behave (albeit with the danger that we may learn bad behaviour when presented with the wrong role models).

 

Teaching Robots Right From Wrong
Simon Parkin

Teaching AI Humanity's Rules


Teaching the simplest ethical rules is very hard; things “everybody knows” from childhood turn out to be devilishly tricky to define exactly.

“You can't take something just because you want it.”

“Wait your turn.”

“Tell the truth.” 


Why are these lessons so hard to teach? Because there are so many situations that call for little “common sense” tweaks. You can take something just because you want it if you're in your kitchen, but not if you're in a friend's. Go ahead and get onto the escalator first if the person in front of you hesitates and steps aside. Even if she asks, don't tell your friend the truth about her surprise birthday party on Saturday.

Children learn these lessons in moment-by-moment interactions with others over the first decade or so of life. But what about AI robots? How should we teach these incredible machines that it's not acceptable to make medical, hiring, or credit decisions based on a person's gender or race; that it's not OK to target recent widows and widowers with “companionship” ads; that it's unacceptable to encourage children to tell “little white lies” to their parents to get things they want?

AI/robotics researchers are currently using different methods to teach machines these simple lessons. One school of thought wants to start by exposing them to a pre-programmed list of persistent ethical values; others worry that this method is too restrictive and can't possibly account for all the situational variations we face in everyday life. Another wants to adopt a “trial and error” method to engender something akin to “guilt” when a robot causes more harm than had been predicted. This means robots making lots of mistakes and possible damage. Still another wants AIs to read millions of stories about typical human interactions and learn the ethical principles that govern them.
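
To give a flavor of that second, “trial and error” idea, here's a toy sketch of my own (in Python, with invented actions and numbers), not any lab's actual method: an agent learns action values by trial and error, and whenever an action causes more harm than it predicted, an extra “guilt” penalty drags that action's value down faster than reward alone would.

import random

# Illustrative only: value learning with a "guilt" term for unexpected harm.
actions = ["warn loudly", "nudge person aside", "shove person aside"]
value = {a: 0.0 for a in actions}               # learned desirability of each action
predicted_harm = {a: 0.1 for a in actions}      # naive initial expectations of harm

def observe_outcome(action: str) -> tuple[float, float]:
    """Hypothetical environment: returns (reward, actual_harm) for an action."""
    harm = {"warn loudly": 0.0, "nudge person aside": 0.2, "shove person aside": 0.9}[action]
    reward = 1.0 - harm + random.uniform(-0.05, 0.05)
    return reward, harm

LEARNING_RATE, GUILT_WEIGHT = 0.1, 2.0
for _ in range(200):
    a = random.choice(actions)                  # explore by trial and error
    reward, harm = observe_outcome(a)
    guilt = GUILT_WEIGHT * max(0.0, harm - predicted_harm[a])   # harm beyond prediction
    value[a] += LEARNING_RATE * (reward - guilt - value[a])
    predicted_harm[a] += LEARNING_RATE * (harm - predicted_harm[a])

print(max(value, key=value.get))                # ends up preferring the low-harm action

Notice what the sketch also makes obvious: the agent only learns which actions to avoid by actually causing harm along the way, which is exactly the objection raised against this approach.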

Watching these experiments is fascinating. The stakes are high and the range of approaches wide. The “winning” approaches will be those that can deliver decisions that most (almost never, all) ethicists judge as ethically sound. Those companies that act in accordance with their AI's ethically sound decisions (e.g., demonstrate decision-making transparency) are most likely to gain dominant market share in a world increasingly concerned with principled corporate behavior. In the past, brands could operate in relative obscurity, but decades of Enrons, VWs, and Ubers have brought ethics out of the business school backwater into the full light of balance sheets.

Could it be that mid-21st century corporate ethical behavior will rise to unprecedented levels thanks to the widespread application of faithfully-followed machine-learned principles? If so, that could prove to be one of the least predicted, and most ironic, consequences of a revolution that is sure to touch every aspect of modern life over the next decade.    
 
Tom Guarriello, Ph.D. 

Thanks for subscribing to the RoboPsych Newsletter

If you're not subscribed, please sign up here.  
 
Be sure to subscribe to the RoboPsych Podcast!
Our mailing address is:
tom@RoboPsych.com

unsubscribe from this list    update subscription preferences 
 





