Should robots have human rights? Act now to regulate killer machines before they multiply and demand the right to vote, warns legal expert
Robots will need new laws to regulate them just like the internet did
Army and tech firms have driven robotics and artificial intelligence
There is rising concern about the dangers of these technologies
Experts warn artificial intelligence could be as dangerous as nuclear weapons
By Jack Millner For Mailonline
Published: 08:22 EST, 20 July 2015 | Updated: 11:20 EST, 20 July 2015
A legal expert has warned that the laws that govern robotics are playing catch-up to the technology and need to be updated in case robots 'wake up' and demand rights.
He also argues that artificial intelligence has come of age, and that we should begin tackling these problems before they arise, as robots increasingly blur the line between person and machine.
'Robotics combines, for the first time, the promiscuity of data with the capacity to do physical harm,' Ryan Calo, from the University of Washington’s School of Law, wrote in his paper on the subject.
'Robotic systems accomplish tasks in ways that cannot be anticipated in advance; and robots increasingly blur the line between person and instrument.'
There has been rising concern about the potential danger of artificial intelligence to humans, with prominent figures including Stephen Hawking and Elon Musk wading in on the debate.
In January both signed an open letter to AI researchers warning of the dangers of artificial intelligence.
The letter warns that without safeguards on the technology, mankind could be heading for a dark future, with millions out of work or even the demise of our species.
Legal expert Calo outlines a terrifying thought experiment detailing how our laws might need an update to deal with the challenges posed by robots demanding the right to vote.
'Imagine that an artificial intelligence announces it has achieved self-awareness, a claim no one seems able to discredit,' Calo wrote.
'Say the intelligence has also read Skinner v. Oklahoma, a Supreme Court case that characterizes the right to procreate as “one of the basic civil rights of man.”
'The machine claims the right to make copies of itself (the only way it knows to replicate). These copies believe they should count for purposes of representation in Congress and, eventually, they demand a pathway to suffrage.
'Of course, conferring such rights to beings capable of indefinitely self-copying would overwhelm our system of governance.
'Which right do we take away from this sentient entity, then, the fundamental right to copy, or the deep, democratic right to participate?'
In other developments, last week computer scientist Professor Stuart Russell said that artificial intelligence could be as dangerous as nuclear weapons.
In an interview with the journal Science for a special edition on Artificial Intelligence, he said: 'From the beginning, the primary interest in nuclear technology was the "inexhaustible supply of energy".
'The possibility of weapons was also obvious. I think there is a reasonable analogy between unlimited amounts of energy and unlimited amounts of intelligence.
'Both seem wonderful until one thinks of the possible risks. In neither case will anyone regulate the mathematics.
'The regulation of nuclear weapons deals with objects and materials, whereas with AI it will be a bewildering variety of software that we cannot yet describe.'
IS IT WRONG TO KICK A ROBOTIC DOG?
Google's Boston Dynamics has released a video designed to show off a smaller, lighter version of its robotic dog called Spot.
But the video received an unexpected backlash after people began complaining that the 'dog' in the clip had been mistreated.
During the footage, employees are seen kicking Spot to prove how stable the machine is on its feet, but this has been dubbed 'cruel' and 'wrong', and has even raised concerns about robotic ethics.
The four-legged, 160lb (73kg) robo-pet can run, climb stairs, jog next to its owner and correct its balance on uneven terrain, and when kicked.
It was built by Google-owned Boston Dynamics and is the 'little brother' of the firm's larger robot, BigDog.
Boston Dynamics has not revealed what Spot will be used for, but its video showed the robot-animal climbing up and down hills, walking through offices and, of course, being kicked repeatedly.
Following the video's release, viewers posted their concerns on Twitter. One user wrote: 'Kicking a dog, even a robot dog seems wrong.'
Another said: 'Just wrong, kick a robot dog as practice: Google's dog robot looks too real for comfort when getting kicked.'
A third added: 'When I first saw [the] Boston Dynamics video I was very disturbed regarding dog-kicking. I'm not the only one.'