'Artificial Intelligence is as dangerous as NUCLEAR WEAPONS': AI pioneer warns smart computers could doom mankind
Expert warns advances in AI mirror research that led to nuclear weapons
He says AI systems could have objectives misaligned with
human values
Companies and the military could allow this to get a
technological edge
He urges the AI community to put human values at the
centre of their work
By Richard Gray for MailOnline
Published: 09:30 EST, 17 July 2015 | Updated: 17:07 EST, 17 July 2015
Artificial intelligence has the potential to be as
dangerous to mankind as nuclear weapons, a leading pioneer of the technology
has claimed.
Professor Stuart Russell, a computer scientist who has led research on artificial intelligence, fears humanity might be 'driving off a cliff' with the rapid development of AI.
He fears the technology could too easily be exploited for
use by the military in weapons, putting them under the control of AI systems.
Leading artificial intelligence pioneer Stuart Russell
has compared artificial intelligence to the development of nuclear weapons. He
particularly fears what will happen if AI is used in weapons and military
systems. Films such as Terminator have given what some believe is a glimpse of
what could happen
He points to the rapid development of AI capabilities at companies such as Boston Dynamics, recently acquired by Google, which is developing autonomous robots for use by the military.
Professor Russell, who is a researcher at the University of California, Berkeley and the Centre for the Study of Existential Risk at Cambridge University, compared the development of AI to the work that was done to develop nuclear weapons.
GOOGLE SETS UP AI ETHICS BOARD TO CURB THE RISE OF THE
ROBOTS
Google has set up an ethics board to oversee its work in
artificial intelligence.
The search giant has recently bought several robotics companies, along with DeepMind, a British firm creating software that tries to help computers think like humans.
One of its founders, DeepMind's Shane Legg, has warned that artificial intelligence is the 'number one risk for this century' and believes it could play a part in human extinction.
'Eventually, I think human extinction will probably occur, and technology will likely play a part in this,' he said in a recent interview, singling out AI among all the forms of technology that could wipe out the human species.
The ethics board, revealed by the website The Information, is to ensure the projects are not abused.
Neuroscientist Demis Hassabis, 37, founded DeepMind two years ago with the aim of helping computers think like humans.
His views echo those of people like Elon Musk who have
warned recently about the dangers of artificial intelligence.
Professor Stephen Hawking also joined a group of leading
experts to sign an open letter warning of the need for safeguards to ensure AI
has a positive impact on mankind.
In an interview with the journal Science for a special edition on Artificial Intelligence, Professor Russell said: 'From the beginning, the primary interest in nuclear technology was the "inexhaustible supply of energy".
'The possibility of weapons was also obvious. I think
there is a reasonable analogy between unlimited amounts of energy and unlimited
amounts of intelligence.
'Both seem wonderful until one thinks of the possible
risks. In neither case will anyone regulate the mathematics.
'The regulation of nuclear weapons deals with objects and
materials, whereas with AI it will be a bewildering variety of software that we
cannot yet describe.
'I'm not aware of any large movement calling for
regulation either inside or outside AI, because we don't know how to write such
regulation.'
This week Science published a series of papers
highlighting the progress that has been made in artificial intelligence
recently.
In one, researchers describe the pursuit of computers able to make rational economic decisions independently of humans, while another outlines how machines are learning from 'big data'.
Nuclear research was conducted with the aim of producing
a new energy source, but scientists also knew that it could be used to create
weapons of great power. Professor Russell warns AI could be put to similar use
if researchers are not careful.
Professor Russell, however, cautions that this unchecked development of technology can be dangerous if its consequences are not fully explored and regulation is not put in place.
He said: 'Here's what Leo Szilard wrote in 1939 after demonstrating a [nuclear] chain reaction: "We switched everything off and went home. That night, there was very little doubt in my mind that the world was headed for grief."
'To those who say, well, we may never get to human-level or superintelligent AI, I would reply: It's like driving straight toward a cliff and saying, "Let's hope I run out of gas soon!"'
In April Professor Russell raised concerns at a United
Nations meeting in Geneva over the dangers of putting military drones and
weapons under the control of AI systems.
He joins a growing number of experts who have warned that scenarios like those seen in films such as Terminator, AI and 2001: A Space Odyssey are not beyond the realms of possibility.
Elon Musk is one of the driving forces behind super-intelligent computers but last year, the Tesla founder warned in a tweet that AI could do more harm than nuclear weapons
He said: 'The basic scenario is explicit or implicit value misalignment - AI systems [that are] given objectives that don't take into account all the elements that humans care about.
'The routes could be varied and complex—corporations
seeking a supertechnological advantage, countries trying to build [AI systems]
before their enemies, or a slow-boiled frog kind of evolution leading to
dependency and enfeeblement not unlike EM Forster's The Machine Stops.'
EM Forster's short story tells of a post-apocalyptic world where humanity lives underground and relies for survival on a giant machine, which then begins to malfunction.
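To make the value-misalignment idea concrete, here is a minimal toy sketch (my illustration, not from the article or from Russell's own work): an agent told only to maximise the number of rooms it cleans will pick the plan that breaks a vase, because the vase never appears in its objective.

```python
# Toy value-misalignment sketch -- purely hypothetical, for illustration.
# The agent optimises the objective it was given, which counts cleaned
# rooms but says nothing about the fragile vase humans care about.

plans = {
    "fast route, straight through the vase": {"rooms_cleaned": 3, "vase_broken": True},
    "careful route, around the vase": {"rooms_cleaned": 2, "vase_broken": False},
}

def stated_objective(outcome):
    # What the designers wrote down: +1 per room cleaned.
    return outcome["rooms_cleaned"]

def human_values(outcome):
    # What people actually care about: clean rooms AND an intact vase.
    return outcome["rooms_cleaned"] - (10 if outcome["vase_broken"] else 0)

# The agent maximises its stated objective, so it chooses the vase-breaking plan.
chosen = max(plans, key=lambda name: stated_objective(plans[name]))
print("Agent chooses:", chosen)                                   # fast route
print("Stated reward:", stated_objective(plans[chosen]))          # 3
print("Score under human values:", human_values(plans[chosen]))   # -7
```

The gap between the two scoring functions is the misalignment Russell describes: the agent's behaviour is optimal by its own lights; its objective simply omits elements humans care about.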
Professor Russell said computer scientists needed to modify the goals of their research to ensure human values and objectives remain central to the development of AI technology.
He said students needed to be trained to treat these
objectives much in the same way 'as containment is central to the goals of
fusion research'.
In an editorial in Science, editors Jelena Stajic,
Richard Stone, Gilbert Chin and Brad Wible, said: 'Triumphs in the field of AI
are bringing to the fore questions that, until recently, seemed better left to
science fiction than to science.
'How will we ensure that the rise of the machines is
entirely under human control? And what will the world be like if truly
intelligent computers come to coexist with humankind?'