I'm an ex-Facebook exec: don't believe what they tell you about ads
I believe the social media giant could target ads at
depressed teens and countless other demographics. But so what?
By Antonio Garcia-Martinez Tuesday 2 May 2017 12.25 EDT
For two years I was charged with turning Facebook data
into money, by any legal means. If you browse the internet or buy items in
physical stores, and then see ads related to those purchases on Facebook, blame
me. I helped create the first versions of that, way back in 2012.
The ethics of Facebook’s micro-targeted advertising were
thrust into the spotlight this week by a report out of Australia. The article,
based on a leaked presentation, said that Facebook was able to identify
teenagers at their most vulnerable, including when they feel “insecure”,
“worthless”, “defeated” and “stressed”.
Facebook claimed the report was misleading, assuring the
public that the company does not “offer tools to target people based on their
emotional state”. If the intention of Facebook’s public relations spin is to
give the impression that such targeting is not even possible on their platform,
I’m here to tell you I believe they’re lying through their teeth.
Just as Mark Zuckerberg was being disingenuous (to put it
mildly) when, in the wake of Donald Trump’s unexpected victory, he expressed
doubt that Facebook could have flipped the presidential election.
Facebook deploys a political advertising sales team,
specialized by political party, and charged with convincing deep-pocketed
politicians that they do have the kind of influence needed to alter the outcome
of elections.
I was at Facebook in 2012, during the previous
presidential race. The fact that Facebook could easily throw the election by
selectively showing a Get Out the Vote reminder in certain counties of a swing
state, for example, was a running joke.
Converting Facebook data into money is harder than it sounds,
mostly because the vast bulk of your user data is worthless. Turns out your
blotto-drunk party pics and flirty co-worker messages have no commercial value
whatsoever.
But occasionally, if used very cleverly, with lots of
machine-learning iteration and systematic trial-and-error, the canny marketer
can find just the right admixture of age, geography, time of day, and music or
film tastes that demarcates a winning demographic within an audience. The
“clickthrough rate”, to use the advertiser’s parlance, doesn’t lie.
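Here is roughly what that trial-and-error looks like, as a minimal illustrative sketch in Python. Everything in it is invented for the example: the attribute lists are hypothetical, and measure_ctr is a stand-in for funding a small test campaign and reading back its clickthrough rate, not any real ads API.

    import itertools
    import random

    # Invented targeting attributes; a real campaign would draw these from
    # the ad platform's actual targeting options.
    AGES = ["18-24", "25-34", "35-44"]
    GEOS = ["urban", "suburban", "rural"]
    HOURS = ["morning", "evening", "late-night"]
    TASTES = ["hip-hop", "indie", "country"]

    def measure_ctr(segment):
        # Hypothetical stand-in: pretend we ran a test campaign against
        # this segment and observed its clickthrough rate. Here it is just
        # seeded noise so the example runs end to end.
        rng = random.Random(str(segment))
        return rng.uniform(0.001, 0.05)

    # Systematic trial-and-error: try every combination of attributes and
    # keep the segment whose measured CTR is highest.
    best = max(itertools.product(AGES, GEOS, HOURS, TASTES), key=measure_ctr)
    print(f"winning segment: {best}, CTR: {measure_ctr(best):.4f}")

In a live campaign the search is sequential rather than exhaustive: you fund a handful of segments, read back their clickthrough rates, cull the losers and refine the winners. That is the machine-learning iteration described above.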
Without seeing the leaked documents, which were
reportedly based around a pitch Facebook made to a bank, it is impossible to
know precisely what the platform was offering advertisers. There’s nothing in
the trade I know of that targets ads at emotions. But Facebook has offered, and
still offers, “psychometric”-type targeting, where the goal is to define a subset of
the marketing audience that an advertiser thinks is particularly susceptible to
their message.
And knowing the Facebook sales playbook, I cannot imagine
the company would have concocted such a pitch about teenage emotions without
the final hook: “and this is how you execute this on the Facebook ads
platform”. Why else would they be making the pitch?
The question is not whether this can be done. It is
whether Facebook should apply a moral filter to these decisions. Let’s assume
Facebook does target ads at depressed teens. My reaction? So what. Sometimes
data behaves unethically.
I’ll illustrate with an anecdote from my Facebook days.
Someone on the data science team had cooked up a new tool that recommended
Facebook Pages users should like. And what did this tool start spitting out?
Every ethnic stereotype you can imagine. We killed the tool when it recommended
then-president Obama if a user had “liked” rapper Jay Z. While that was a
statistical fact – people who liked Jay Z were more likely to like Obama – it
was one of the statistical truths Facebook couldn’t be seen espousing.
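For flavor, here is roughly how such a recommender works under the hood. This is a toy Python sketch with invented data, not Facebook's actual system: it counts which Pages are liked together and recommends any Page whose conditional probability, given the seed Page, clears a threshold.

    from collections import defaultdict
    from itertools import combinations

    # Toy dataset: each set is one user's liked Pages (invented for illustration).
    users = [
        {"Jay Z", "Barack Obama", "ESPN"},
        {"Jay Z", "Barack Obama"},
        {"Jay Z", "Nike"},
        {"Taylor Swift", "Starbucks"},
    ]

    page_counts = defaultdict(int)   # how many users like each Page
    pair_counts = defaultdict(int)   # how many users like both Pages
    for likes in users:
        for page in likes:
            page_counts[page] += 1
        for a, b in combinations(sorted(likes), 2):
            pair_counts[(a, b)] += 1
            pair_counts[(b, a)] += 1

    def recommend(page, min_conf=0.5):
        # Recommend Pages where P(other | page) >= min_conf -- the same
        # bare co-occurrence logic that pairs Jay Z fans with Obama.
        return [other for (p, other), n in pair_counts.items()
                if p == page and n / page_counts[page] >= min_conf]

    print(recommend("Jay Z"))  # -> ['Barack Obama'] on this toy data

The tool has no notion of why two Pages co-occur; it simply reports the correlation, which is precisely how it ends up reciting every stereotype present in the underlying data.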
I disagreed. Jay Z is a millionaire music tycoon, so what
if we associate him with the president? In our current world, there’s a long
list of Truths That Cannot Be Stated Publicly, even though there’s plenty of
data suggesting their correctness, and this was one of them.
African Americans living in postal codes with depressed
incomes likely do respond disproportionately to ads for usurious “payday”
loans. Hispanics between the ages of 18 and 25 probably do engage with ads
singing the charms and advantages of military service.
Why should those examples of targeting be viewed as any
less ethical than, say, ads selling $100 Lululemon yoga pants targeting
thirtysomething women in affluent postal codes like San Francisco’s Marina
district?
The hard reality is that Facebook will never try to limit
such use of their data unless the public uproar reaches such a crescendo as to
be un-mutable. Which is what happened with Trump and the “fake news”
accusation: even the implacable Zuck had to give in and introduce some
anti-fake news technology. But they’ll slip that trap as soon as they can. And
why shouldn’t they? At least in the case of ads, the data and the clickthrough
rates are on their side.
Antonio Garcia-Martinez was a Facebook product manager
(2011-2013) and is the author of Chaos Monkeys: Obscene Fortune and Random
Failure in Silicon Valley.