Alphabet’s Eric Schmidt On Fake News, Russia, And
“Information Warfare”
“One of the things I did not understand,” says Schmidt,
“was that these systems can be used to manipulate public opinion in ways that
are quite inconsistent with what we think of as democracy.”
BY AUSTIN CARR 10.29.17
Ever since the 2016 presidential election, Alphabet, the tech giant that owns Google, has faced intense pressure to acknowledge its role in the spread of Russia-backed disinformation that may have helped shape the outcome. In some of the most unequivocal comments yet, the company’s
executive chairman, Eric Schmidt, recently acknowledged to Fast Company that
the search giant didn’t do enough to safeguard its services against Russian
manipulation.
“We did not understand the extent to which
governments–essentially what the Russians did–would use hacking to control the
information space. It was not something we anticipated strongly enough,”
Schmidt said. “I worry that the Russians in 2020 will have a lot more powerful
tools.”
Schmidt’s comments, from an August 30 interview published
as part of a new Fast Company feature about how Alphabet is grappling with
digital threats such as fake news and disinformation, offer a preview of how
Google may frame its testimony before the Senate Judiciary and Intelligence
Committees this week.
Public criticism and government scrutiny of leading technology companies are mounting, and the companies are being asked to deliver a full
accounting of the ways Russia leveraged their services, in addition to how
they, directly or indirectly, assisted in those efforts. Yet inside Alphabet,
there is a sense among some top executives focused on these challenges that the
company does not owe the public a mea culpa.
In a range of interviews late this summer, these
executives said that Alphabet was actually ahead of the curve on these issues
compared to its Silicon Valley counterparts. “Some of the problems that are
being created are being created because the [tech] companies aren’t fixing
them,” Schmidt told Fast Company. “There are now teams [inside Alphabet] looking
at the technology behind information warfare. Not in the military sense–I mean
in the manipulation sense.”
In the weeks following these conversations, there has
been a stream of revelations detailing how vulnerable technology companies were
to the proliferation of Russia-linked propaganda leading up to the presidential
election. Just this month, the New York Times reported that the Kremlin-backed news organization RT exploited its close relationship with YouTube, the Google-owned video network, to spread propaganda, and the Washington Post reported that Google has found evidence that Russian operatives intent on peddling disinformation spent tens of thousands of dollars on ads across Google platforms such as search and Gmail.
Google is actively examining these findings. “We are
taking a deeper look to investigate attempts to abuse our systems, working with
researchers and other companies, and will provide assistance to ongoing
inquiries,” Andrea Faville, a company spokesperson, said in a statement.
Some of the research on the problem of fake news is being
handled by Jigsaw, a think tank-like subsidiary previously known as Google
Ideas. The group was founded by Schmidt and Jared Cohen, a former State Department policy staffer who worked under Condoleezza Rice and Hillary Clinton. Jigsaw’s leaders consider it an early warning system for potential threats to Google, and say they first started exploring disinformation in the context of Russia’s
response to Euromaidan, the wave of anti-government protests in Ukraine in late
2013 and early 2014.
“We’ve been looking at the exercises the Russians were
doing in terms of disinformation and misinformation to shape that environment
for years,” says Scott Carpenter, another former State Department official who
now serves as Jigsaw’s managing director and also acts as a liaison with teams
at Google and other Alphabet subsidiaries.
Still, it wasn’t until October 2016 that the group
assisted in launching a tool specifically to address fake news in the United
States. Called Fact Check, the service is embedded in Google News and labels
articles using criteria such as whether they include opinion or highly cited
reporting. “Before the election, people were like, ‘What the fuck do you need a
fact checker for?'” Carpenter recalls. “And then they were like, ‘Oh my god, we
have Fact Check! Look! We did it! Google! In Search! Before the election!'”
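Under the hood, Google’s announcement of the tag tied it to open markup rather than to Google adjudicating claims itself: publishers annotate their fact-check articles with schema.org’s ClaimReview structured data, and the label surfaces what that markup says. The Python below is a minimal illustrative sketch, not Google’s code; the sample JSON-LD values are invented, though the ClaimReview property names are the real schema.org ones.

import json

# Invented sample of the JSON-LD a publisher might embed in a
# fact-check article. The property names follow schema.org's
# ClaimReview type; the values are made up for illustration.
page_jsonld = """{
  "@context": "https://schema.org",
  "@type": "ClaimReview",
  "claimReviewed": "An example claim circulating online",
  "reviewRating": {"@type": "Rating", "alternateName": "False"},
  "author": {"@type": "Organization", "name": "Example Fact Checker"}
}"""

def extract_fact_check(jsonld_text):
    """Return (claim, verdict, reviewer) if the blob is a ClaimReview,
    else None -- roughly the data a crawler needs to render a label."""
    data = json.loads(jsonld_text)
    if data.get("@type") != "ClaimReview":
        return None
    return (data.get("claimReviewed"),
            data.get("reviewRating", {}).get("alternateName"),
            data.get("author", {}).get("name"))

print(extract_fact_check(page_jsonld))
# -> ('An example claim circulating online', 'False', 'Example Fact Checker')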
Google, in response to growing public criticism since
that time, has created initiatives to address the problem of fake news. In
April, the search giant announced an effort, codenamed “Project Owl,” to tweak its algorithm to stop “the spread of blatantly misleading, low-quality,
offensive, or downright false information” polluting its search results, as
engineering VP Ben Gomes said in a company blog post.
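Google has not published Project Owl’s mechanics, but the general shape of such a change can be sketched: leave relevance ranking intact while demoting results that a separate quality signal flags. The toy Python below is purely illustrative; the scores, the floor parameter, and the idea of a per-URL quality dictionary are all invented for this sketch.

# Toy illustration only -- not Google's algorithm. Assumes each result
# carries a relevance score and that some separate (unshown) classifier
# supplies a 0..1 quality score per URL.
def rerank(results, quality, floor=0.5):
    """results: list of (url, relevance); quality: dict url -> 0..1.
    A zero-quality page keeps only `floor` of its relevance weight."""
    def adjusted(item):
        url, relevance = item
        return relevance * (floor + (1 - floor) * quality.get(url, 1.0))
    return sorted(results, key=adjusted, reverse=True)

results = [("conspiracy.example", 0.92), ("newsroom.example", 0.88)]
quality = {"conspiracy.example": 0.1, "newsroom.example": 0.95}
print(rerank(results, quality))
# -> the reputable page now outranks the higher-relevance low-quality one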
Despite these moves, a handful of policy experts and
former White House and State Department officials told me that Alphabet–like
Facebook and Twitter–is neither moving fast enough nor investing the resources needed to police its platforms. Jigsaw, for example, has just 60 employees,
only a fraction of whom are actually working on issues related to
disinformation. “To be blunt, Silicon Valley has lived in a libertarian fantasy
world with all this ‘Don’t Do Evil’ shit for the last 10 to 20 years, and there
is a realization starting to come over Washington that it’s a real uncontrolled
industry,” says Max Bergmann, a senior fellow at the liberal think tank Center for American Progress who focuses on U.S.-Russia geopolitics and previously served on former Secretary of State John Kerry’s policy planning staff.
But Cohen, who serves as Jigsaw’s CEO and is also a
senior adviser to Schmidt at Alphabet, says the company can’t solve the fake
news problem immediately. Rather, it is taking a more deliberate approach.
“I always tell the team that we’re not reactive, otherwise, one year we’d be
working on Ebola, and another year we’d be working on fake news,” says Cohen.
Adds Carpenter, “[We’re] trying to tackle this question of fake news but in a
way that we think makes sense, not like, ‘Ah! The house is on fire! We need to
do something about it!’ Forget about doing anything [yet]; we need to
understand what’s actually happening [first].”
Toward that end, in June, Jigsaw sent a team to Macedonia
to understand why the Balkan country has become a hub producing a disproportionate share of the world’s fake news. “We want to look at what we call ‘networked
propaganda,’ the idea that fake news is part of the information food chain that
spreads online,” Carpenter says. “How does the food chain work? How does it
spread from offline to online back to offline? What are the distribution
channels?” Schmidt adds that Alphabet has “gotten very interested in
misinformation, how misinformation works, and how it manipulates people. These
are areas that are new to me.”
Schmidt and Cohen say one of their biggest concerns going
forward is the erosion of truth. That is, as agents of disinformation become
more adept at spreading propaganda, there’s the potential that even the tech- and media-savvy will find it increasingly difficult to differentiate between
what’s real and what’s fake. “You could be interacting with a bunch of people
online, believing you’re talking to Bernie Sanders or Trump supporters, but
really, you’re talking to three guys outside of St. Petersburg,” Cohen says. “It
really oversimplifies it to just say this is a fake news problem. We talk about
it in terms of ‘digital paramilitaries.'”
Schmidt, likewise, references the dangers of “mechanized”
fake content on YouTube (say, a video of Hillary Clinton with audio and visuals manipulated to make it sound and look as if she’s confessing to one of the many
conspiracy theories orbiting her), and warns of the increasing role bots might
play in the national discourse as their interaction skills improve. “How many
Twitter accounts are real people versus non-real people? It’d be useful to know
if the thing tweeting at and spamming you was a person or not,” he says. “And
in Facebook’s case, they’re working hard on this, but how would you know that
it was a computer that was spreading viral fake news?”
“One of the things I did not understand was that these
systems can be used to manipulate public opinion in ways that are quite
inconsistent with what we think of as democracy,” Schmidt continues. “So that’s a really interesting problem that Google, and particularly Jigsaw, should be pursuing, whether with fake news sites or more subtle things. Just using the
Russians as an example–although plenty of other governments can do this–how
would you feel if that stuff gets stronger? Would you be worried about it?”
Artificial intelligence and machine learning will be essential to addressing
these challenges, he explains, but “it remains to be seen whether some of these
algorithms can be used to prevent bad stuff.”
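To make Schmidt’s question concrete, here is a deliberately naive heuristic of the kind bot-detection research starts from. Every feature and threshold below is invented for illustration; it is not any platform’s actual detector, and production systems rely on far richer behavioral and network signals, increasingly learned by the machine-learning methods Schmidt mentions.

from dataclasses import dataclass

# All features and thresholds here are illustrative guesses, not any
# platform's real detection logic.
@dataclass
class Account:
    tweets_per_day: float
    retweet_share: float     # fraction of posts that are retweets, 0..1
    account_age_days: int
    followers: int
    following: int

def bot_likelihood(a: Account) -> float:
    """Crude 0..1 score: high-volume, retweet-heavy, newly created
    accounts that follow far more users than follow them back."""
    score = 0.0
    if a.tweets_per_day > 72:        # roughly one post every 20 minutes
        score += 0.35
    if a.retweet_share > 0.9:        # almost no original content
        score += 0.25
    if a.account_age_days < 30:      # very young account
        score += 0.2
    if a.following / max(a.followers, 1) > 20:
        score += 0.2
    return min(score, 1.0)

suspect = Account(tweets_per_day=200, retweet_share=0.97,
                  account_age_days=12, followers=40, following=2500)
print(bot_likelihood(suspect))       # 1.0 -> flag for human review

Even a toy like this shows why the problem is hard: fixed thresholds that catch automated amplifiers also sweep up enthusiastic humans, which is one reason detection keeps shifting toward learned models rather than hand-set rules.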
In such a tense political climate, the Jigsaw team is
exceedingly hesitant to talk about President Donald Trump, out of concern they
might come across as partisan. (Schmidt is in a particularly precarious
position, having advised the Clinton team on their election tech, according to
hacked campaign emails published by WikiLeaks.) Jigsaw research head Yasmin
Green–who was born in Iran and could be affected by Trump’s latest travel
ban–explains that it makes more strategic sense for the company to focus on the
larger problem. “With disinformation, we care about organized [state-sponsored]
networked propaganda. If you become consumed with the politics or the actor,
you’re really missing the opportunity,” she says. “Our mandate is to protect
people from threats, and that’s why, publicly, on the record, it’s not
appropriate for us to talk about our political leanings.”
But this week’s congressional hearings are intended to
help the government deal with more immediate digital threats, and to hold the
tech world accountable for its part in disseminating disinformation during the
previous election. Earlier this month, lawmakers introduced a bipartisan bill
that would require internet companies such as Google, Facebook, and Twitter to
disclose more information about political ads purchased on their digital
platforms. More stringent regulatory requirements may not be far behind.
When the topic of government intervention came up in my
conversations with Carpenter, he stressed that an effective regulatory solution
would require input from the private sector. “The only lever government has to
pull is policy, and the problem with policy is it takes a long time to develop
and negotiate,” Carpenter says. “If there is going to be a regulatory framework
that would come out of looking at something like fake news, how in the world
are those people [in government] going to understand what’s actually happening
on the internet if we don’t? It’s impossible. So if we can help understand it,
then we can help educate policy makers, and we can also take some prophylactic
steps beforehand so we can obviate the need for something that’s very
cumbersome.”
The stakes couldn’t be higher for Alphabet, its
subsidiaries, and competitors.
“[In the coming years,] there will be a lot more
pressure–a moral sense of obligation–on Silicon Valley to [solve] these
problems,” former Secretary of State Condoleezza Rice told Fast Company
recently. “I hope they’re willing to fix them, because the worst thing that can
happen is that the government just starts regulating things it doesn’t
understand.”
Alphabet agrees with that viewpoint. And when Google’s
information security director Richard Salgado and general counsel Kent Walker
appear at the Senate committee hearings this week, they are likely to frame
their responses around the idea that the tech companies themselves are
best-suited to resolve these problems. “We support efforts to improve
transparency, enhance disclosures, and reduce foreign abuse. We’re evaluating
steps we can take on our own platforms and will work closely with lawmakers,
the FEC, and the industry to explore the best solutions,” Faville, the Google
spokesperson, said in a statement.
“I don’t think it’s fair to ask the government to solve
all these problems–they don’t have the resources,” Schmidt told me. “The tech
industry–and I’m including Google and YouTube here–has a responsibility to get
this right and not be manipulated.”
To some observers, it’s difficult to comprehend why the
public should rely upon the same technology companies that perpetuated–if not created–these problems to now fix them. “Our government doesn’t have the
lawyers to understand what’s going on, they can’t afford the technologists, and
they’re five steps behind, which means, fundamentally, these tech companies are
operating by themselves and are frequently telling the government, ‘Hey, here’s
what needs to be done.’ Usually, those people are just going to be caring about
their own profit motive, and that’s a horrible place to be in,” says Ian
Bremmer, founder of Eurasia Group, a political risk consultancy. “My personal
view is that I don’t have a lot of trust for a lot of these organizations.”
But others caution against engaging in too much finger
pointing on this issue. Antony Blinken, the former deputy secretary of state and deputy national security adviser under Obama, who now advises Alphabet and
Facebook, contends that the problem ultimately lies with Russia, not Silicon
Valley. “The [tech] platforms have to do better to defend against malicious
actors, but let’s not lose sight of the forest for the trees: the problem is
Russia and other actors who use our openness against us, not the platforms,”
Blinken says. “The biggest mistake we can make is to get into a circular firing
squad with government and the tech companies. The only winner in that scenario
is Russia.”