After Facebook scrutiny, is Google next?
By RYAN NAKASHIMA AND MATT O'BRIEN Apr 21, 2018, 1:20 PM ET
MENLO PARK, Calif. — Facebook has taken the lion's share
of scrutiny from Congress and the media about data-handling practices that
allow savvy marketers and political agents to target specific audiences, but
it's far from alone. YouTube, Google and Twitter also have giant platforms
awash in more videos, posts and pages than any set of human eyes could ever
check. Their methods of serving ads against this sea of content may come under
the microscope next.
Advertising and privacy experts say a backlash is
inevitable against a "Wild West" internet that has escaped scrutiny
before. There continues to be a steady barrage of new examples where
unsuspecting advertisers had their brands associated with extremist content on
major platforms.
In the latest discovery, CNN reported that it found more
than 300 retail brands, government agencies and technology companies had their
ads run on YouTube channels that promoted white nationalists, Nazis, conspiracy
theories and North Korean propaganda.
Child advocates have also raised alarms about the ease
with which smartphone-equipped children are exposed to inappropriate videos and
deceptive advertising.
"I absolutely think that Google is next and long
overdue," said Josh Golin, director of the Boston-based Campaign for a
Commercial-Free Childhood, which asked the Federal Trade Commission to
investigate Google-owned YouTube's advertising and data collection practices
earlier this month.
YouTube has repeatedly outlined the ways it attempts to
flag and delete hateful, violent, sexually explicit or harmful videos, but its
screening efforts have often missed the mark.
It also lets advertisers avoid running ads against sensitive
content — such as news or politics — that doesn't violate YouTube guidelines but
doesn't fit a company's brand. Those methods appear to have failed.
"YouTube has once again failed to correctly filter
channels out of our marketing buys," said a statement Friday from 20th
Century Fox Film, which learned that its ads were running on videos posted by a
self-described Nazi. YouTube has since deleted the offending channel, but the
Hollywood firm says it has unanswered questions about how it happened in the
first place.
"All of our filters were in place in order to ensure
that this did not happen," Fox said, adding it has asked for a refund of
any money shared with the "abhorrent channel."
YouTube said Friday that it has made "significant
changes to how we approach monetization" with "stricter policies,
better controls and greater transparency" and said it allows advertisers
to exclude certain channels from ads. It also removes ads when it's notified of
problems running beside content that doesn't comply with its policies. "We
are committed to working with our advertisers and getting this right."
So far, just one major advertiser — Baltimore-based
retailer Under Armour — has said it withdrew its advertising in the wake
of the CNN report, though the pause lasted only a few days after it was
first notified of the problem last week. After its shoe commercial turned up on a channel
known for espousing white nationalist beliefs, Under Armour worked with YouTube
to expand its filters to exclude certain topics and keywords.
On the other hand, Procter & Gamble, which had kept
its ads off YouTube since March 2017, said it had come back to the platform
but drastically pared back the channels it would advertise on to fewer than 10,000.
It has worked on its own, with third parties and with YouTube to create its
restrictive list.
That's just a fraction of the roughly 3 million YouTube
channels in the U.S. that accept ads, and even more stringent than YouTube's
"Google Preferred" lineup, which focuses on the most popular 5
percent of videos.
The CNN report was "an illustration of exactly why
we needed to go above and beyond just what YouTube's plans were and why we
needed to take more control of where our ads were showing up," said
P&G spokeswoman Tressie Rose.
The big problem, experts say, is that advertisers lured
by the reach and targeting capability of online platforms can mistakenly expect
that the same standards of decency that apply on network TV will apply online. In the same
way, broadcast TV rules that require transparency about political ad buyers are
absent on the web.
"There have always been regulations regarding
appropriate conduct in content," says Robert Passikoff, president of Brand
Keys Inc., a New York customer research firm. Regulating content on the
internet is one area "that has gotten away from everyone."
Also absent from the internet are many of the rules that
govern children's programming on television sets. TV networks, for instance,
are allowed to air commercial breaks but cannot use kids' characters to
advertise products. Such "host-selling" runs rampant on internet
services such as YouTube.
Action to remove ads from inappropriate content is mostly
reactive because of a lack of upfront control over what gets uploaded, and it
generally takes the mass threat of a boycott to get advertisers to demand
changes, according to BrandSimple consultant Allen Adamson. "The social
media backlash is what you're worried about," he said.
At the same time, politicians are having trouble keeping
up with the changing landscape, as was evident in how ill-informed many senators
and representatives appeared during questioning of Facebook CEO Mark Zuckerberg
earlier this month.
"We're in the early stages of trying to figure out
what kind of regulation makes sense here," said Larry Chiagouris,
professor of marketing at Pace University in New York. "It's going to take
quite some time to sort that out."
O'Brien reported from Brookline, Massachusetts.