AI Scientists Gather to Plot Doomsday Scenarios (and
Solutions)
Researchers, cyber-security experts and policy wonks ask
themselves: What could possibly go wrong?
by Dina Bass, March 2, 2017, 3:00 AM PST
Artificial intelligence boosters predict a brave new
world of flying cars and cancer cures. Detractors worry about a future where
humans are enslaved to an evil race of robot overlords. Veteran AI scientist
Eric Horvitz and Doomsday Clock guru Lawrence Krauss, seeking a middle ground,
gathered a group of experts in the Arizona desert to discuss the worst that
could possibly happen -- and how to stop it.
Their workshop took place last weekend at Arizona State
University with funding from Tesla Inc. co-founder Elon Musk and Skype
co-founder Jaan Tallinn. Officially dubbed "Envisioning and Addressing
Adverse AI Outcomes," it was a kind of AI doomsday war game that organized
some 40 scientists, cyber-security experts and policy wonks into groups of attackers
-- the red team -- and defenders -- blue team -- playing out AI-gone-very-wrong
scenarios, ranging from stock-market manipulation to global warfare.
Horvitz is optimistic -- a good thing because machine
intelligence is his life's work -- but some other, more dystopian-minded
backers of the project seemed to find his outlook too positive when plans for
this event started about two years ago, said Krauss, a theoretical physicist
who directs ASU's Origins Project, the program running the workshop. Yet
Horvitz said that for these technologies to move forward successfully and to
earn broad public confidence, all concerns must be fully aired and addressed.
"There is huge potential for AI to transform so many
aspects of our society in so many ways. At the same time, there are rough edges
and potential downsides, like any technology," said Horvitz, managing
director of Microsoft's Research Lab in Redmond, Washington. "To maximally
gain from the upside we also have to think through possible outcomes in more
detail than we have before and think about how we’d deal with them."
Participants were given "homework" to submit
entries for worst-case scenarios. They had to be realistic -- based on current
technologies or those that appear possible -- and set five to 25 years in the
future. The entrants with the "winning" nightmares were chosen to
lead the panels, which featured about four experts on each of the two teams to
discuss the attack and how to prevent it.
Turns out many of these researchers can match
science-fiction writers Arthur C. Clarke and Philip K. Dick for dystopian
visions. In many cases, little imagination was required -- scenarios like
technology being used to sway elections or new cyber attacks using AI are being
seen in the real world, or are at least technically possible. Horvitz cited
research that shows how to alter the way a self-driving car sees traffic signs
so that the vehicle misreads a "stop" sign as "yield."
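A rough sense of how such an attack works: adversarial examples add small, carefully chosen pixel changes that flip a classifier's output. The sketch below shows one standard technique, the fast gradient sign method, in PyTorch; the toy model, image and class labels are stand-ins for illustration, not the setup from the research Horvitz cited, and real traffic-sign attacks involve physical alterations to the sign itself.

```python
# Sketch of the fast gradient sign method (FGSM): nudge every pixel a
# small step in the direction that increases the classifier's loss.
# The model and data below are hypothetical stand-ins.
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.03):
    """Return a copy of `image` perturbed to raise the model's error."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step each pixel by +/- epsilon along the sign of the loss gradient.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()

# Toy stand-in for a sign classifier: 32x32 RGB images, two classes.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 2))
image = torch.rand(1, 3, 32, 32)   # placeholder "stop sign" image
label = torch.tensor([0])          # class 0 = "stop" (hypothetical)

adversarial = fgsm_perturb(model, image, label)
print(model(image).argmax(1).item(), model(adversarial).argmax(1).item())
```

Against this randomly initialized toy model a flipped prediction isn't guaranteed, but against trained classifiers, surprisingly small perturbations often suffice -- which is exactly why the result worries researchers.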
The possibility of intelligent, automated cyber attacks
is the one that most worries John Launchbury, who directs one of the offices at
the U.S. Defense Advanced Research Projects Agency, and Kathleen Fisher,
chairwoman of the computer science department at Tufts University, who led that
session. What happens if someone constructs a cyber weapon designed to hide
itself and evade all attempts to dismantle it? Now imagine it spreads beyond
its intended target to the broader internet. Think Stuxnet, the computer virus
created to attack the Iranian nuclear program that got out in the wild, but
stealthier and more autonomous.
"We're talking about malware on steroids that is
AI-enabled," said Fisher, who is an expert in programming languages.
Fisher presented her scenario under a slide bearing the words "What could
possibly go wrong?" which could have also served as a tagline for the
whole event.
How did the defending blue team fare on that one? Not
well, said Launchbury. They argued that the advanced AI needed for an attack would
require a lot of computing power and communication, so it would be easier to
detect. But the red team felt that it would be easy to hide behind innocuous
activities, Fisher said. For example, attackers could get innocent users to
play an addictive video game to cover up their work.
To prevent a stock-market manipulation scenario dreamed
up by University of Michigan computer science professor Michael Wellman, blue
team members suggested treating attackers like malware by trying to recognize
them via a database of known types of hacks. Wellman, who has been in AI for
more than 30 years and calls himself an old-timer on the subject, said that
approach could be useful in finance.
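As a rough illustration of that "treat attackers like malware" idea, the sketch below fingerprints short windows of trading activity and checks them against a database of known manipulation patterns. The signature scheme and the single "spoofing" entry are invented here for illustration, not anything proposed at the workshop.

```python
# Hypothetical signature-based scan of an order-event stream, loosely
# analogous to matching malware against a database of known samples.
from collections import Counter

# Invented example signature: a burst of order cancellations with few
# real trades, the rough shape of "spoofing"-style manipulation.
KNOWN_ATTACK_SIGNATURES = {
    ("cancel", 8): "spoofing-like cancellation burst",
}

def fingerprint(window):
    """Reduce a window of order events to (most common event type, count)."""
    event_type, count = Counter(e["type"] for e in window).most_common(1)[0]
    return (event_type, count)

def scan(events, window_size=10):
    """Slide a window over the stream; yield matches to known signatures."""
    for i in range(len(events) - window_size + 1):
        sig = fingerprint(events[i:i + window_size])
        if sig in KNOWN_ATTACK_SIGNATURES:
            yield i, KNOWN_ATTACK_SIGNATURES[sig]

# A stream dominated by order cancellations triggers the known pattern.
stream = [{"type": "place"}] * 2 + [{"type": "cancel"}] * 8
print(list(scan(stream)))   # -> [(0, 'spoofing-like cancellation burst')]
```

Real market-surveillance systems are far richer than this, but the shape is the same: known bad behavior becomes a signature, and new activity is scanned against the signature set.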
Beyond actual solutions, organizers hope the doomsday
workshop started conversations on what needs to happen, raised awareness and
combined ideas from different disciplines. The Origins Project plans to make
public materials from the closed-door sessions and may design further workshops
around a specific scenario or two, Krauss said.
DARPA's Launchbury hopes the presence of policy figures
among the participants will foster concrete steps, like agreements on rules of
engagement for cyber war, automated weapons and robot troops.
Krauss, chairman of the board of sponsors of the group
behind the Doomsday Clock, a symbolic measure of how close we are to global
catastrophe, said some of what he saw at the workshop "informed" his
thinking on whether the clock ought to shift even closer to midnight. But don't
go stocking up on canned food and moving into a bunker in the wilderness just
yet.
"Some things we think of as cataclysmic may turn out
to be just fine," he said.