Self-driving cars programmed to decide who dies in a crash
Todd Spangler, Detroit Free Press Published 2:38 p.m. ET
Nov. 23, 2017 | Updated 2:06 p.m. ET Nov. 24, 2017
[Video: See how self-driving cars prepare for the real world inside a private testing facility owned by Google's autonomous car company, Waymo. USA TODAY]
WASHINGTON — Consider this hypothetical:
It’s a bright, sunny day and you’re alone in your
spanking new self-driving vehicle, sprinting along the two-lane Tunnel of Trees
on M-119 high above Lake Michigan north of Harbor Springs. You’re sitting back,
enjoying the view. You’re looking out through the trees, trying to get a
glimpse of the crystal blue water below you, moving along at the
45-mile-an-hour speed limit.
As you approach a rise in the road, heading south, a school bus appears, heading north, driven by a human, and it veers sharply toward you. There is no time to stop safely, and no time for you to take control of the car.
Does the car:
A. Swerve sharply into the trees, possibly killing you
but possibly saving the bus and its occupants?
B. Perform a sharp evasive maneuver around the bus and
into the oncoming lane, possibly saving you, but sending the bus and its driver
swerving into the trees, killing her and some of the children on board?
C. Hit the bus, possibly killing you as well as the
driver and kids on the bus?
In everyday driving, such no-win choices may be exceedingly rare but, when they happen, what should a self-driving car — programmed in advance — do? Or what about any situation — even a less dire one — where a moral snap judgment must be made?
It's not just a theoretical question anymore, with
predictions that in a few years, tens of thousands of semi-autonomous vehicles
may be on the roads. About $80 billion has been invested in the field. Tech
companies are working feverishly on them, with Google-affiliated Waymo among those
testing cars in Michigan, and mobility companies like Uber and Tesla racing to
beat them. Automakers are placing a big bet on them. A testing facility to
hurry along research is being built at Willow Run in Ypsilanti.
There's every reason for excitement: Self-driving vehicles will ease commutes, returning lost time to workers; enhance mobility for seniors and those with physical challenges; and sharply reduce the more than 35,000 deaths on U.S. highways each year.
But there is also a host of nagging questions to be sorted out, from what happens to cab drivers to whether such vehicles will create sprawl.
And there is an existential question:
Who dies when the car is forced into a no-win situation?
“There will be crashes,” said Van Lindberg, an attorney
in the Dykema law firm's San Antonio office who specializes in autonomous
vehicle issues. “Unusual things will happen. Trees will fall. Animals, kids
will dart out.” Even as self-driving cars save thousands of lives, he said,
“anyone who gets the short end of that stick is going to be pretty unhappy
about it.”
Few people seem to be in a hurry to take on these
questions, at least publicly.
It’s unaddressed, for example, in legislation moving
through Congress that could result in tens of thousands of autonomous vehicles
being put on the roads. In new guidance for automakers by the U.S. Department
of Transportation, it is consigned to a footnote that says only that ethical
considerations are "important" and links to a brief acknowledgement
that "no consensus around acceptable ethical decision-making" has
been reached.
Whether the technology in self-driving cars is superhuman
or not, there is evidence that people are worried about the choices
self-driving cars will be programmed to make.
Last year, for instance, a Daimler executive set off a
wave of criticism when he was quoted as saying its autonomous vehicles would
prioritize the lives of its passengers over anyone outside the car. The company
later insisted he’d been misquoted, saying it would be illegal “to make a decision in favor of one person and against another.”
Last month, Sebastian Thrun, who founded Google’s
self-driving car initiative, told Bloomberg that the cars will be designed to
avoid accidents, but that “If it happens where there is a situation where a car
couldn’t escape, it’ll go for the smaller thing.”
But what if the smaller thing is a child?
How that question gets answered may be important to the
development and acceptance of self-driving cars.
Azim Shariff, an assistant professor of psychology and
social behavior at the University of California, Irvine, co-authored a study
last year that found that while respondents generally agreed that a car should,
in the case of an inevitable crash, kill the fewest people possible
regardless of whether they were passengers or people outside of the car, they
were less likely to buy any car “in which they and their family member would be
sacrificed for the greater good.”
Self-driving cars could save tens of thousands of lives
each year, Shariff said. But individual fears could slow down acceptance,
leaving traditional cars and their human drivers on the road longer to battle
it out with autonomous or semi-autonomous cars. Already, the American
Automobile Association says three-quarters of U.S. drivers are suspicious of
self-driving vehicles.
“These ethical problems are not just theoretical,” said
Patrick Lin, director of the Ethics and Emerging Sciences Group at California
Polytechnic State University, who has worked with Ford, Tesla and other
autonomous vehicle makers on just such issues.
While he can’t talk about specific discussions, Lin says
some automakers “simply deny that ethics is a real problem, without realizing
that they’re making ethical judgment calls all the time” in their development,
determining what objects the car will "see," how it will predict what
those objects will do next and what the car's reaction should be.
Does the computer always follow the law? Does it slow
down whenever it "sees" a child? Is it programmed to generate a
random "human" response? Do you make millions of computer
simulations, simply telling the car to avoid killing anyone, ever, and program
that in? Is that even an option?
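To make that concrete, here is a purely illustrative sketch, not any automaker's actual code, of how value judgments get baked into ordinary perception-and-reaction logic. The object categories, distances and caution margins below are invented for the example.

```python
# Illustrative only: a toy reaction policy showing how value judgments
# (what the car slows down for, and how cautiously) end up encoded as
# parameters chosen by programmers. All numbers here are invented.
from dataclasses import dataclass

@dataclass
class DetectedObject:
    kind: str          # e.g. "child", "adult", "squirrel", "ball"
    distance_m: float  # distance ahead of the car, in meters

# Choosing these margins is itself an ethical judgment call:
# they decide how differently the car treats each kind of object.
CAUTION_MARGIN_M = {"child": 40.0, "adult": 25.0, "ball": 30.0, "squirrel": 5.0}

def target_speed(current_speed_mps: float, objects: list[DetectedObject]) -> float:
    """Return a (possibly reduced) target speed given what the car 'sees'."""
    speed = current_speed_mps
    for obj in objects:
        margin = CAUTION_MARGIN_M.get(obj.kind, 15.0)  # default for unknown objects
        if obj.distance_m < margin:
            # Slow down in proportion to how far inside the caution margin we are.
            speed = min(speed, current_speed_mps * obj.distance_m / margin)
    return speed

# A child 20 m ahead halves the target speed; a squirrel at the same
# distance leaves it untouched.
print(target_speed(20.0, [DetectedObject("child", 20.0)]))     # 10.0
print(target_speed(20.0, [DetectedObject("squirrel", 20.0)]))  # 20.0
```

Every entry in that table is, in effect, an answer to one of the questions above, fixed by a developer long before the car encounters a real child or squirrel.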
“You can see what a thorny mess it becomes pretty
quickly,” said Lindberg. “Who bears that responsibility? … There are half a
dozen ways you could answer that question leading to different outcomes.”
The trolley problem
Automakers and suppliers largely downplay the risks of
what in philosophical circles is known as “the trolley problem” — named for a
no-win hypothetical situation in which, in the original format, a person
witnessing a runaway trolley could allow it to hit several people or, by
pulling a lever, divert it, killing someone else.
In the circumstance of the self-driving car, it’s often
boiled down to a hypothetical vehicle hurtling toward a crowded crosswalk with
malfunctioning brakes: A certain number of occupants will die if the car
swerves; a number of pedestrians will die if it continues. The car must be
programmed to do one or the other.
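One naive way to "program one or the other" is the utilitarian tally that survey respondents, as described earlier, endorse in the abstract. The sketch below is a thought experiment only; a real vehicle could not know these casualty estimates, and the tie-break rule shown is just one of many contestable choices.

```python
# Thought experiment only: a crude utilitarian rule for the stylized
# brake-failure scenario. Real systems do not, and arguably could not,
# know expected casualty counts like these.

def choose_action(deaths_if_swerve: int, deaths_if_continue: int) -> str:
    """Pick the action with fewer expected deaths; swerve on a tie
    (one of many possible, and contestable, tie-break rules)."""
    return "swerve" if deaths_if_swerve <= deaths_if_continue else "continue"

# One occupant versus five pedestrians: the rule sacrifices the occupant.
print(choose_action(deaths_if_swerve=1, deaths_if_continue=5))  # "swerve"
```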
Philosophical considerations aside, automakers argue it’s all but bunk because the scenario is so contrived.
“I don't remember when I took my driver’s license test
that this was one of the questions,” said Manuela Papadopol, director of
business development and communications for Elektrobit, a leading automotive
software maker and a subsidiary of German auto supplier Continental AG.
If anything, self-driving cars could almost eliminate
such an occurrence. They will sense such a problem long before it would become
apparent to a human driver and slow down or stop. Redundancies — for brakes,
for sensors — will detect danger and react more appropriately.
“The cars will be smart — I don’t think there's a problem
there. There are just solutions," Papadopol said.
Alan Hall, Ford's spokesman for autonomous vehicles,
described the self-driving car’s capabilities — being able to detect objects
with 360-degree sensory data in daylight or at night — as “superhuman.”
“The car sees you and is preparing different scenarios
for how to respond,” he said.
Lin said that, in general, many self-driving automakers believe the simple act of braking, of slowing to a stop, solves the trolley problem. But it doesn't in every case, such as a theoretical one where the car is being tailgated by a speeding fuel tanker.
Should government decide?
Some experts and analysts believe solving the trolley
problem could be a simple matter of regulators or legislators deciding in
advance what actions a self-driving car should take in a no-win situation. But
others doubt that any set of rules can capture and adequately react to every
such scenario.
The question doesn’t need to be as dramatic as asking who
dies in a crash either. It could be as simple as deciding what to do about
jaywalkers or where a car places itself in a lane next to a large vehicle to
make its passengers feel secure or whether to run over a squirrel that darts
into a road.
Chris Gerdes, who as director of the Center for
Automotive Research at Stanford University has been working with Ford, Daimler
and others on the issue, said the question is ultimately not about deciding who
dies. It's about how to keep no-win situations from happening in the first
place and, when they do occur, setting up a system for deciding who is
responsible.
For instance, he noted that California law requires vehicles to yield the right of way to pedestrians in a crosswalk but also says pedestrians have a duty not
to suddenly enter a crosswalk against the light. Michigan and many other states
have similar statutes.
Presumably, then, there could be a circumstance in which
the responsibility for someone darting into the path of an autonomous vehicle
at the last minute rests with that person — just as it does under California
law.
But that “forks off into some really interesting questions,"
Gerdes said, such as whether the vehicle could potentially be programmed to
react differently, say, for a child. "Shouldn’t we treat everyone the same
way?” he asked. "Ultimately, it’s a societal decision,” meaning it may
have to be settled by legislators, courts and regulators.
That could result in a patchwork of conflicting rules and
regulations across the U.S.
“States would continue to have that ability to regulate
how they operate on the road,” said U.S. Sen. Gary Peters, D-Mich., one of the authors
of federal legislation under consideration that would allow for tens of
thousands of autonomous vehicles to be tested on U.S. highways in the years to
come. He said that while design and safety standards will rest with federal regulators, states will continue to impose traffic rules.
Peters acknowledged that it would be “an impossible
standard” to eliminate all crashes. But he argued that people need to remember
that autonomous vehicles will save tens of thousands of lives a year. In 2015,
the consulting firm McKinsey & Co. said research indicated self-driving
cars could reduce traffic fatalities by 90% once fully deployed. More than
37,000 people died on U.S. roads in 2016, the vast majority because of human error.
But researchers, automakers, academics and others
understand something else about self-driving cars and the risks they may still
pose, namely, that for all their promise to reduce accidents, they can't
eliminate them.
“It comes back to whether you want to find ways to
program in specifics or program in desired outcomes,” said Gerdes. “At the end
of the day, you’re still required to come up with what you want the desired
outcomes to be and the desired outcome cannot be to avoid any accidents all the
time.
“It becomes a little uncomfortable sometimes to look at
that."
The hard questions
While some people in the industry, like Tesla’s Elon
Musk, believe fully autonomous vehicles could be on U.S. roads within a few
years, others say it could be a decade or more — and even longer before the
full promise of self-driving cars and trucks is realized.
The trolley problem is just one of the questions that will have to be cracked before then.
There are others, like those faced by Daryn Nakhuda, CEO of Mighty AI, which is in the business of breaking down into data all the objects self-driving cars are going to need to “see” in order to predict and react. A bird flying at the window. A thrown ball. A mail truck parked so there is not enough space in the car’s lane to pass without crossing the center line.
Automakers will have to decide what the car “sees” and
what it doesn’t. Seeing everything around it — and processing it — could be a
waste of limited processing power. Which means another set of ethical and moral
questions.
Then there is the question of how self-driving cars could
be taught to learn and respond to the tasks they are given — the stuff of
science fiction that seems about to come true.
While self-driving cars can be programmed — told what to
do when that school bus comes hurtling toward them — there are other options. Through millions
of computer simulations and data from real self-driving cars being tested, the
cars themselves can begin to learn the "best" way to respond to a
given situation.
For example, Waymo — Google's self-driving car arm — in a
recent government filing said through trial and error in simulations, it's
teaching its cars how to navigate a tricky left turn against a flashing yellow
arrow at a real intersection in Mesa, Ariz. The simulations — not the
programmers — determine when it's best to inch into the intersection and when
it's best to accelerate through it. And the cars learn how to mimic real
driving.
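As a toy stand-in for that kind of trial-and-error learning, the sketch below searches for a go/no-go gap threshold for an unprotected left turn by scoring thousands of simulated attempts. This is not Waymo's actual method; the simulator, reward numbers and candidate thresholds are invented for illustration.

```python
# Toy example of learning by simulation: find the smallest oncoming-traffic
# gap (in seconds) at which it pays to accelerate through a left turn.
# Not Waymo's method; every number here is invented.
import random

def simulate_turn(gap_s: float, go: bool) -> float:
    """Reward for one simulated attempt: going with a small gap risks a
    heavily penalized collision; waiting costs a little time."""
    if not go:
        return -1.0                              # cost of waiting another cycle
    crash_prob = max(0.0, (4.0 - gap_s) / 4.0)   # smaller gap -> higher risk
    return -100.0 if random.random() < crash_prob else 10.0

def learn_threshold(trials_per_option: int = 20000) -> float:
    """Score each candidate threshold over many random gaps; keep the best."""
    best_threshold, best_score = None, float("-inf")
    for threshold in (1.0, 2.0, 3.0, 4.0, 5.0, 6.0):
        total = 0.0
        for _ in range(trials_per_option):
            gap = random.uniform(0.0, 8.0)       # random oncoming-traffic gap
            total += simulate_turn(gap, go=(gap >= threshold))
        if total / trials_per_option > best_score:
            best_threshold, best_score = threshold, total / trials_per_option
    return best_threshold

print(learn_threshold())  # typically 4.0 with these invented numbers
```

The point of the toy is that the threshold comes out of the scoring rather than a line of code written by hand, which is the sense in which the simulations, not the programmers, determine the behavior.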
Ultimately, through such testing, the cars themselves could potentially learn how best to get from Point A to Point B, just by being programmed to discern what "best" means — say, the fastest, safest, most direct route. Through simulation and data from real-world conditions, the cars would "learn" and execute the request.
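A minimal sketch of what programming a car to "discern what best means" could look like: candidate routes scored with hand-picked weights for time, estimated risk and distance. The routes, weights and risk figures are invented; the point is that the weights themselves encode a value judgment about what "best" is.

```python
# Illustrative only: "best" route as a weighted score over travel time,
# estimated risk and distance. All values below are invented.

candidate_routes = [
    # name,      minutes, risk score (0-1), miles
    ("freeway",  22.0,    0.10,             18.0),
    ("surface",  31.0,    0.04,             14.0),
    ("scenic",   40.0,    0.02,             16.0),
]

# Choosing these weights is where "fastest, safest, most direct" gets decided.
WEIGHT_TIME, WEIGHT_RISK, WEIGHT_DISTANCE = 1.0, 150.0, 0.5

def route_cost(minutes: float, risk: float, miles: float) -> float:
    return WEIGHT_TIME * minutes + WEIGHT_RISK * risk + WEIGHT_DISTANCE * miles

best = min(candidate_routes, key=lambda r: route_cost(r[1], r[2], r[3]))
print(best[0])  # "surface" with these invented weights
```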
Here's where the science fiction comes in, however.
Playing 'Go'
A computer programmed to “learn” how to play the ancient
Chinese game of Go by just such a means is not only now beating grandmasters
for the first time in history — and long after computers were beating
grandmasters in chess — it is making moves that seem counterintuitive and
inexplicable to expert human players.
What might that look like with cars?
At the American Center for Mobility in Ypsilanti, Mich., where a testing ground is being completed for self-driving cars, CEO John Maddox said the facility will be able to put to the test what he calls “edge” cases that vehicles will have to deal with regularly, such as not confusing the darkness of a tunnel with a wall or accurately predicting whether a person is about to step off a curb.
The facility will also play a role, through that testing, in getting the public used to the idea of what self-driving cars can do, how they will operate, and how they can be far safer than vehicles operated by humans, even if some questions remain about their functioning.
“Education is critical,” Maddox said. “We have to be able to demonstrate and illustrate how AVs work and how they don’t work.”
As for the trolley problem, most automakers and experts
expect some sort of standard to emerge — even if it's not entirely clear what
it will be.
At SAE International — formerly known as the Society of Automotive Engineers, a global standard-making group — Chief Product Officer
Frank Menchaca said reaching a perfect standard is a daunting, if not
impossible, task, with so many fluid factors involved in any accident: Speed.
Situation. Weather conditions. Mechanical performance.
Even with that standard, there may be no good answer to
the question of who dies in a no-win situation, he said. Especially if it's to
be judged by a human.
“As human beings, we have hundreds of thousands of years
of moral, ethical, religious and social behaviors programmed inside of us,” he
added. “It’s very hard to replicate that.”