Google CEO: Fears about artificial intelligence are 'very legitimate'
Tony Romm, Drew Harwell and Craig Timberg, The Washington Post
Published 4:02 p.m. PST, Wednesday, December 12, 2018
[Photo caption: Google CEO Sundar Pichai answered a lot of questions about anti-conservative bias.]
Google CEO Sundar Pichai, head of one of the world's
leading artificial intelligence companies, said in an interview this week that
concerns about harmful applications of the technology are "very legitimate"
- but the tech industry should be trusted to responsibly regulate its use.
Speaking with The Washington Post on Tuesday afternoon,
Pichai said that new AI tools - the backbone of innovations such as driverless
cars and disease-detecting algorithms - require companies to set ethical
guardrails and think through how the technology can be abused.
"I think tech has to realize it just can't build it,
and then fix it," Pichai said. "I think that doesn't work."
Tech giants have to ensure that artificial intelligence
with "agency of its own" doesn't harm humankind, Pichai said. He said
he is optimistic about the technology's long-term benefits, but his assessment
of the potential risks of AI parallels that of some tech critics who say the
technology could be used to empower invasive surveillance, deadly weaponry and
the spread of misinformation. Other tech executives, like SpaceX and Tesla
founder Elon Musk, have offered more dire predictions that AI could prove to be
"far more dangerous than nukes."
Google's AI technology underpins a range of initiatives,
from the company's controversial China project to the surfacing of hateful,
conspiratorial videos on its YouTube subsidiary - a problem Pichai vowed to
address in the coming year. How Google decides to deploy its AI has also
sparked recent employee unrest.
Pichai's call for self-regulation followed his testimony
in Congress, where lawmakers threatened to impose limits on technology in
response to its misuse, including as a conduit for spreading misinformation and
hate speech. His acknowledgment of the potential threats posed by AI was a
critical assertion because the Indian-born engineer has often touted the
world-shaping implications of automated systems that could learn and make
decisions without human control.
Pichai said in the interview that lawmakers around the
world are still trying to grasp AI's effects and the potential need for
government regulation. "Sometimes I worry people underestimate the scale
of change that's possible in the mid-to-long term, and I think the questions
are actually pretty complex," he said. Other tech giants, including
Microsoft, recently have embraced regulation of AI - both by the companies that
create the technology and the governments that oversee its use.
But AI, if handled properly, could have "tremendous
benefits," Pichai explained, including helping doctors detect eye disease
and other ailments through automated scans of health data. "Regulating a
technology in its early days is hard, but I do think companies should
self-regulate," he said. "This is why we've tried hard to articulate
a set of AI principles. We may not have gotten everything right, but we thought
it was important to start a conversation."
Pichai, who joined Google in 2004 and became chief
executive 11 years later, in January called AI "one of the most important
things that humanity is working on." He said it could prove to be
"more profound" for human society than "electricity or
fire." But the race to perfect machines that can operate on their own has
rekindled familiar fears that Silicon Valley's corporate ethos - "move
fast and break things," as Facebook once put it - could result in
powerful, imperfect technology eliminating jobs and harming average people.
Within Google, its AI efforts also have created
controversy: The company faced heavy criticism earlier this year due to its
work on a Defense Department contract involving AI that could automatically tag
cars, buildings and other objects for use in military drones. Some employees
resigned due to what they called Google's profiting off the "business of
war."
Asked about the employee backlash, Pichai told The Post
that his workers were "an important part of our culture." "They
definitely have an input, and it's an important input; it's something I
cherish," he said.
In June, after announcing that Google wouldn't renew the
contract next year, Pichai unveiled a set of AI-ethics principles that included
general bans on developing systems that could be used to cause harm, damage
human rights or aid in "surveillance violating internationally accepted norms."
The company faced earlier criticism for releasing AI
tools that could be misused in the wrong hands. Google's release in 2015 of its
internal machine-learning software, TensorFlow, has helped accelerate the
wide-scale development of AI, but it has also been used to automate the
creation of lifelike fake videos that have been used for harassment and
disinformation.
Google and Pichai have defended the release by saying
that keeping the technology restricted could lead to less public oversight and
prevent developers and researchers from advancing its capabilities in
beneficial ways.
"Over time, as you make progress, I think it's
important to have conversations around ethics (and) bias, and make simultaneous
progress," Pichai said during his interview with The Post.
"In some sense, you do want to develop ethical
frameworks, engage noncomputer scientists in the field early on," he said.
"You have to involve humanity in a more representative way, because the
technology is going to affect humanity."
Pichai likened the early work to set parameters around AI
to the academic community's efforts in the early days of genetics research.
"Many biologists started drawing lines on where the technology should
go," he said. "There's been a lot of self-regulation by the academic
community, which I think has been extraordinarily important."
The Google executive said such self-regulation would be most essential
in the development of autonomous weapons, an issue that has rankled tech
executives and employees. In July, thousands of tech workers representing
companies including Google signed a pledge against developing AI tools that
could be programmed to kill.
Pichai also said he found some hateful, conspiratorial
YouTube videos described in a Washington Post story on Tuesday "abhorrent,"
and he indicated that the company would work to improve its systems for
detecting problematic content. The videos, which had been watched millions of
times on YouTube since appearing in April, discussed baseless allegations that
Democrat Hillary Clinton and her longtime aide Huma Abedin had attacked, killed
and drank the blood of a girl.
Pichai said he had not seen the videos, which he was
questioned about during the congressional hearing, and he declined to say
whether YouTube's shortcomings in this area were a result of limits in the
detection systems or in policies for evaluating whether a particular video
should be removed. But he added, "You'll see us in 2019 continue to do
more here."
Pichai also portrayed Google's efforts to develop a new
product for the government-controlled Chinese internet market as preliminary,
declining to say what the product might be or when it would come to market - if
ever.
Dubbed Project Dragonfly, the effort has caused backlash
among employees and human-rights activists who warn about the possibility of
Google assisting government surveillance in a country that tolerates little
political dissent. When asked whether it's possible that Google might make a product
that allows Chinese officials to know who searches for sensitive terms, such as
the Tiananmen Square massacre, Pichai said it was too soon to make any such
judgments.
"It's a hypothetical," Pichai said. "We
are so far away from being in that position."
Here are the top jobs that AI will eliminate, ranked by the chance that AI will make each obsolete sometime in the future:
10. Economists: 43 percent
9. Physical scientists: 43 percent
8. Computer programmers: 48 percent
7. Agricultural engineers: 49 percent
6. Personal financial advisers: 58 percent
5. Atmospheric and space scientists: 67 percent
4. Airfield operations specialists: 71 percent
3. Nuclear technicians: 85 percent
2. Budget analysts: 94 percent
1. Compensation and benefits managers: 96 percent