New study finds it’s harder to turn off a robot when it’s begging for its "life"
The robot told test subjects it was scared of the dark
and pleaded ‘No! Please do not switch me off!’
By James Vincent (@jjvincent) | Aug 2, 2018, 1:15pm EDT
Robots designed to interact socially with humans are
slowly becoming more and more common. They’re appearing as receptionists, tour
guides, security guards, and porters. But how good are we at treating these
robots as robots? A growing body of evidence suggests not good at all. Studies have
repeatedly shown we’re extremely susceptible to social cues coming from
machines, and a recent experiment by German researchers demonstrates that
people will even refuse to turn a robot off — if it begs for its life.
In the study, published in the open access journal PLOS ONE, 89 volunteers were recruited to complete a pair of tasks with the help of
Nao, a small humanoid robot. The participants were told that the tasks (which
involved answering a series of either/or questions, like “Do you prefer pasta
or pizza?”, and organizing a weekly schedule) were intended to improve Nao’s learning
algorithms. But this was just a cover story: the real test came after the
tasks were completed, when scientists asked participants to turn off the robot.
In roughly half of the experiments, the robot protested,
telling participants it was afraid of the dark and even begging: “No! Please do
not switch me off!” When this happened, the human volunteers were noticeably more
reluctant to turn the bot off. Of the 43 volunteers who heard Nao’s pleas, 13
refused outright. And the remaining 30 took, on average, twice as long to comply
as those who did not hear the desperate cries at all. (Just
imagine that scene from The Good Place for reference.)
When quizzed about their actions, participants who
refused to turn the robot off gave a number of reasons for doing so. Some said
they were surprised by the pleas; others, that they were scared they were doing
something wrong. But the most common response was simply that the robot said it
didn’t want to be switched off, so who were they to disagree?
As the study’s authors write: “Triggered by the
objection, people tend to treat the robot rather as a real person than just a
machine by following or at least considering to follow its request to stay
switched on.”
This finding, they say, builds on a larger theory known
as “the media equation.” This was first established in a 1996 book of the same
name by two psychologists: Byron Reeves and Clifford Nass. Reeves and Nass
theorized that humans tend to treat non-human media (which includes TV, film,
computers, and robots) as if they are human. We talk to machines, reason with
our radios, and console our computers, said Reeves and Nass.
Various studies since have shown how this principle
affects our behavior, especially when it comes to interactions with robots.
We’re more likely to enjoy interacting with a bot that we perceive as having
the same personality type as us, for example, and we’ll happily associate
machines with gender stereotypes. We observe what’s known as the “rule of
reciprocity” when interacting with robots (meaning we tend to be nice to them
when they’re nice to us) and will even take orders from one if it’s presented
as an authority figure.
“Now and in future,” wrote a group of scholars on the
topic in 2006, “there will be more similarities between human-human and
human-machine interactions than differences.”
And this isn’t the first time we’ve tested the “begging
computer does not want to die” scenario. Similar research was carried out in
2007, with a robot resembling a cat that also pleaded for its life.
Participants were instructed by the observing scientists to turn it off, and all of
them did, though not before going through a serious moral struggle.
In a video clip of the experiment, you can see the robot
asking a volunteer: “You’re not really going to switch me off, are you?” The
human says: “Yes I will!” — while failing to do so.
The new study, which was published July 31st, builds on
this earlier work by using a greater number of participants. It also tested
whether it made a difference if the robot was shown to have social skills
before it asked not to be turned off. In some of the trials, Nao expressed
opinions to the human volunteers, told jokes, and shared personal information.
Surprisingly, this social behavior did not have a huge effect on whether the
volunteers “spared” Nao.
So what does all this mean for our machine-filled future?
Are we destined to be manipulated by socially sophisticated bots that know how
to push our buttons? It’s certainly something to be aware of, says Aike
Horstmann, a PhD student at the University of Duisburg-Essen who led the new
study. But, she says, it’s not a huge threat.
“I hear this worry a lot,” Horstmann tells The Verge.
“But I think it’s just something we have to get used to. The media equation
theory suggests we react to [robots] socially because for hundreds of thousands
of years, we were the only social beings on the planet. Now we’re not, and we
have to adapt to it. It’s an unconscious reaction, but it can change.”
In other words: get used to turning off machines, even if
they don’t appear to like it. They’re silicon and electricity, not flesh and
blood.