Robot with “morals” makes surprisingly deadly decisions
By Rob Waugh
Anyone excited by the idea of stepping into a driverless
car should read the results of a somewhat alarming experiment at the
University of the West of England in Bristol, where a robot was programmed to
rescue others from certain doom… but often didn’t.
The so-called ‘ethical robot’, also known as the Asimov
robot after the science fiction writer whose work inspired the film ‘I,
Robot’, was meant to save other robots, standing in for humans, from falling
into a hole. Often, though, it simply stood by and let them trundle into the
danger zone.
The experiment used a robot programmed to be ‘aware’ of
its surroundings, with a separate program instructing it to save lives where
possible.
Despite having time to save at least one of the two ‘humans’
from the hole, the robot failed to do so more than half of the time. In the
final experiment, it saved the ‘people’ only 16 times out of 33.
The robot’s programming mirrored Isaac Asimov’s First Law
of Robotics: ‘A robot may not injure a human being or, through inaction,
allow a human being to come to harm.’
The robot was programmed to save humans wherever
possible, and all was fine, says roboticist Alan Winfield, at least to begin with.
“We introduced a third robot - acting as a second proxy
human. So now our ethical robot would face a dilemma - which one should it
rescue?” says Winfield.
The problem isn’t, thankfully, that robots are enemies
of humankind, but that the robot tried too hard to save lives. Three times out
of 33, it managed, through a cunning series of lunges, to save both. Much of
the rest of the time, it appeared unable to decide.
“The problem is that the Asimov robot sometimes dithers,”
says Winfield. “It notices one human robot, starts toward it but then almost
immediately notices the other. It changes its mind. And the time lost dithering
means the Asimov robot cannot prevent either robot from falling into the hole.”
“It was a bit unexpected,” Winfield says. “There was
clearly time to save at least one robot, but half the time it just stood
there and failed to rescue either.”
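To see why the dithering is so costly, here is a toy simulation in Python (our own sketch, not Winfield’s code) in which a rescuer must reach a drifting ‘human’ before it falls into a hole. A controller that commits to one target saves it; one that re-targets every time it notices the other human wastes its motion and saves neither.

def simulate(committed):
    # Positions on a line: the rescuer starts midway between two
    # proxy humans, each drifting toward its own hole at +/-8.
    rescuer = 0.0
    humans = {'H1': -4.0, 'H2': +4.0}
    drift, speed, hole = 0.4, 1.0, 8.0
    target = 'H1'
    for tick in range(40):
        if not committed:
            # Dithering: attention flips to the other human every
            # tick, so the rescuer oscillates near its start point.
            target = 'H2' if target == 'H1' else 'H1'
        rescuer += speed if humans[target] > rescuer else -speed
        for name in list(humans):
            if abs(rescuer - humans[name]) <= 0.5:
                return f'saved {name} at tick {tick}'
            humans[name] += drift if humans[name] > 0 else -drift
            if abs(humans[name]) >= hole:
                return f'{name} fell into the hole at tick {tick}'
    return 'timed out'

print('committed:', simulate(True))    # reaches H1 before it falls
print('dithering:', simulate(False))   # neither human is saved

Run as written, the committed controller saves H1 at tick 5, while the dithering one oscillates in place and watches H1 fall at tick 9.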
Winfield says that he “did not expect” to make a robot
capable of acting ethically, but after he shared his paper with philosophers
and ethicists, one came back saying the robot was indeed acting with a sort of
“ethics”.
“As ever when you do experiments, the most interesting
results are often the ones you don’t expect,” says Winfield. “We did not set
out to build an ethical robot. We were studying the idea of robots with
internal models of the outside world. We just added an ethical decision-making
layer to the logic: we call it the Consequence Engine. That’s the bit that
makes the robot act ‘ethically’.”
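As a rough illustration of that idea (again our own sketch in Python, with illustrative names and numbers, not the published implementation), a consequence engine can be as small as an internal model that rolls the world forward for each candidate action, plus an ethical layer that picks the action predicted to harm the fewest humans. The toy world is the same as in the sketch above.

from typing import Optional

HOLE, DRIFT, SPEED = 8.0, 0.4, 1.0

def rollout(rescuer, humans, target, ticks=30):
    # Internal model: simulate 'rescuer chases target' for a fixed
    # horizon and count how many humans are predicted to fall.
    humans = dict(humans)
    fallen = 0
    for _ in range(ticks):
        if target in humans:
            rescuer += SPEED if humans[target] > rescuer else -SPEED
            if abs(rescuer - humans[target]) <= 0.5:
                del humans[target]            # intercepted: safe
        for name in list(humans):
            humans[name] += DRIFT if humans[name] > 0 else -DRIFT
            if abs(humans[name]) >= HOLE:
                del humans[name]              # predicted to fall
                fallen += 1
    return fallen

def consequence_engine(rescuer, humans) -> Optional[str]:
    # Ethical layer: consider chasing each human, or doing nothing,
    # and choose whichever action predicts the least harm.
    options = list(humans) + [None]
    return min(options, key=lambda t: rollout(rescuer, humans, t))

print(consequence_engine(0.0, {'H1': -4.0, 'H2': +4.0}))  # -> H1

With two symmetrically endangered humans, chasing either one scores equally well, and re-running a selector like this on every control cycle is exactly the situation in which the choice can flip back and forth: the dithering Winfield observed.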
The research could prove important: robotic systems are
already entrusted with (real) human lives, and driverless cars could be
offered to the public in the near future.
“We’re finding our way,” says Winfield. “We’ve set out
one proposition about how to create an ethical robot; we’re not claiming this
is the final answer. We’ve started to share our work with other researchers -
but this is an initial exploration. We certainly did not expect to make an
ethical robot.”