Scientists Ponder How to Create Artificial Intelligence That Won’t Destroy Us
Researchers take responsibility for the futuristic
monster they may build
By Jack Clark December 18, 2015 — 4:00 AM PST
The creators of artificially intelligent machines are
often depicted in popular fiction as myopic Dr. Frankensteins who are oblivious
to the apocalyptic technologies they unleash upon the world. In real life, they
tend to wring their hands over the big questions: good versus evil and the
impact the coming wave of robots and machine brains will have on human workers.
Scientists, recognizing their work is breaking out of the
research lab and into the real world, grappled during a daylong summit on Dec.
10 in Montreal with such ethical issues as how to prevent computers that are
smarter than humans from putting people out of work, complicating legal
proceedings, or, even worse, seeking to harm society. Today’s AI can
learn how to play video games, help automate e-mail responses, and drive cars
under certain conditions. That’s already provoked concerns about the effect it
may have on workers.
"I think the biggest challenge is the challenge to
employment," said Andrew Ng, the chief scientist for Chinese search engine
Baidu Inc., which announced last week that one of its cars had driven itself on
a 30 kilometer (19 mile) route around Beijing with no human intervention. The
speed at which advances in AI may change the workplace means "huge numbers of
people in their 20s and 40s and 50s" would need to be retrained in a way
that’s never happened before, he said.
"There’s no doubt that there are classes of jobs
that can be automated today that could not be automated before," said Erik
Brynjolfsson, an economist at the Massachusetts Institute of Technology, citing
workers such as junior lawyers tasked with e-discovery or supermarket checkout
clerks displaced by self-checkout machines.
"You hope that there are some new jobs needed in
this economy," he said. "Entrepreneurs and managers haven’t been as
creative in inventing the new jobs as they have been in automating some of the
existing jobs."
Yann LeCun, Facebook’s director of AI research, isn’t as
worried, saying that society has adapted to change in the past. "It’s
another stage in the progress of technology," LeCun said. "It’s not
going to be easy, but we’ll have to deal with it."
There are other potential quandaries, like how the legal
landscape will change as AI starts making more decisions independent of any
human operator. "It would be very difficult in some cases to bring an
algorithm to the fore in the context of a legal proceeding," said Ian
Kerr, the Canada Research Chair in Ethics, Law & Technology at the
University of Ottawa Faculty of Law. "I think it would be a tremendous challenge."
Others are looking further ahead, trying to analyze the
effects of AI that exceeds human capabilities. Last year, Google acquired
DeepMind, an AI company focusing on fundamental research with the goal of
developing machines that are smarter than people. Demis Hassabis, one of the
company’s founders, described it as an Apollo program for the creation of
artificial intelligence.
"I don’t want to claim we know when we’ll do
it," said Shane Legg, another founder of the company. "Being prepared
ahead of time is better than being prepared after."
While they think the chance is small that a malicious super-intelligence will
be developed, Legg and others have set out to study its potential effects
because of the profound threat it could pose.
"I don’t think the end stage is the world we now
have with waiter robots who bring you your food on a tray," said Nick
Bostrom, an Oxford academic whose book Superintelligence: Paths, Dangers,
Strategies has informed the discussion about the implications of intelligent
machines. "The end result might be something that looks very different from
what we are familiar with."
Shahar Avin, a researcher at the University of
Cambridge’s Centre for the Study of Existential Risk, said AI research is still
too young to offer a good way to study how to prevent malignant AI from emerging.
"We want an agent that cannot or will not modify its
own value system," Avin said. It’s an open question how to do this, he
said. A combination of more funding and more public debate should bring more
researchers into the field to study how to make AI safe.
As part of the effort, Elon Musk, founder of Tesla Motors
Inc. and Space Exploration Technologies Corp., and other tech luminaries
announced the creation on Dec. 11 of OpenAI, a nonprofit research group
dedicated to developing powerful new AI technologies in as open a manner as possible.
If super-intelligence is inevitable, it’s best to build
it in the open and encourage people to think about its consequences, Musk said.
He also funded the Future of Life Institute, an organization dedicated to
exploring some of the risks posed to humanity by new technologies, including
AI.
Ng said, however, that the fascination with "evil AI
intelligences" may prove a distraction from much more likely negative
effects, such as job losses. And LeCun said AI could be better and kinder than
people.
"We’re driven by basic instincts that were built
into us for survival," LeCun said. "Machines will have a very, very
different type of intelligence. It will not have the drives that make people do
bad things to each other."
David Johnston, the governor general of Canada, made remarks at a separate
event in Toronto that captured some of these anxieties.
"What we do increasingly sense is that machine
learning and artificial intelligence are likely to emerge reasonably soon, and
when they do, we know the impact will be significant," Johnston said.
"So, what role can we play, and how do we maximize the opportunities of
this technology, and minimize the challenges that arise?"