TECH-POCALYPSE China to become artificial intelligence
‘world leader’ by 2030 – but will it spark a killer computer arms race?
Experts fear the race to develop a super-smart machine
mind could end up creating a digital destroyer that wipes out humanity
By Jasper Hamill 21st July 2017, 11:40 am Updated: 21st July 2017, 12:02 pm
CHINA has vowed to become a world leader in artificial
intelligence within a decade.
The People’s Republic has revealed plans to spend
billions on developing computers that are capable of thinking for themselves.
However, experts believe an AI version of the space race
could be disastrous for the world and even lead to the creation of a digital
destroyer that ends up wiping out humanity.
Yesterday, China released a “national AI development
plan” which committed it to spending $22.15 billion (£17 billion) on AI
research by 2020 and $59.07 billion (£45 billion) by 2025.
China wants to square up to Western market leaders
including Google and Microsoft, which are racing ahead in the development of
smart computers.
“The local and central government are supporting this AI
effort,” said Rui Yong, chief technology officer at PC maker Lenovo Group.
Rui said officials back AI because it is seen as the
latest “industrial revolution” similar to the advent of the combustion engine,
electricity or the internet.
“They see the fourth industrial revolution as coming,
(and think) we better invest and support and build a very strong ecosystem,”
said Rui.
Lord Martin Rees, a British astronomer, recently claimed
that machines will rule our world within the next few centuries.
China’s State Council said the “situation with China on
national security and international competition is complex”, which was part of
the incentive for making a domestic AI push.
“We must take initiative to firmly grasp this new stage
of development for artificial intelligence and create a new competitive edge,”
it said.
Earlier this year, a top thinker said a careless
researcher could accidentally bring about the end of humanity while racing
to develop super-intelligent AI.
Nick Bostrom, head of the University of Oxford's Future
of Humanity Institute, said that one slip-up in the development of artificial
intelligence (AI) could spell curtains for humanity.
“There is a control problem,” he said.
“If you have a very tight tech race to get there first,
whoever invests in safety could lose the race.
“This could exacerbate the risks from out of control AI.”
Once computers are as intelligent as humans – a moment
that tech experts refer to as the “singularity” – it is likely there will be an
“intelligence explosion” which sees the machines reach super-intelligence in a
scarily short space of time.
To illustrate the threat to our species, Bostrom gave the
chilling example of a machine that’s built to make paperclips.
If this machine is superintelligent, it may decide that
humans are standing in the way of its paperclip-building drive and kill us all
to enhance its own productivity.
The only way to stop a potential apocalypse is to
ensure artificially intelligent machines are built in such a way that
they cannot see humans as an enemy or obstacle.
Dr Bostrom said one way to do this is to work
collaboratively on AI, rather than getting locked in an arms race.
“Getting a commitment that AI should be to the benefit of
all is very desirable,” he added.
“Even if the other guy gets there first, if the AI is going
to be used to benefit you then that would reduce incentive to throw caution to
the wind just to try and beat them.”
Professor Stephen Hawking has warned that developing
artificial intelligence could literally be “the last thing we do” as a species.
Lord Rees, the Astronomer Royal, also predicted
that machine life will replace humanity and reign for billions of years beyond
our time on Earth.
With machine intelligence developing at an extremely fast
pace, Rees believes the robot uprising could happen in just a few centuries.