Google Is Coming for Your Face
Personal data is routinely harvested from the most vulnerable populations, without transparency, regulation, or principles—and this should concern us all.
By Malka Older, October 14, 2019
Last week, The
New York Times reported on the federal government’s plans to
collect DNA samples from people in immigration custody, including asylum
seekers. This is an infringement of civil rights and privacy, and opens the
door to further misuse of data in the long term. People in custody have no meaningful way to refuse or consent to this collection of personal data, and there is no clarity about the limits on how it may be used in the future. The DNA
samples will go into the FBI’s criminal database, even though requesting asylum
is not a crime and entering the country illegally is only a misdemeanor. That
makes the practice not only an invasion of privacy in the present but also potentially
a way to skew statistics and arguments in debates over immigration in the
future.
The collection of
immigrant DNA is not an isolated policy. All around the world, personal data is
harvested from the most vulnerable populations, without transparency, regulation,
or principles. It’s a pattern we should all be concerned about, because it
continues right up to the user agreements we click on again and again.
In February, the World
Food Programme (WFP) announced a five-year partnership with the data
analytics company Palantir Technologies. While the WFP claimed that this
partnership would help make emergency assistance to refugees and other
food-insecure populations more efficient, it was broadly criticized within the
international aid community for potential infringement of privacy. A group of
researchers and data-focused organizations, including the Engine Room, the AI
Now Institute, and DataKind, sent an open letter to the WFP, expressing their concerns
over the lack of transparency in the agreement and the potential for
de-anonymization, bias, violation of rights, and undermining of humanitarian
principles, among other issues.
Many humanitarian
agencies are struggling with how to integrate modern data collection and
analysis into their work. Advances in data technology offer the potential to streamline processes and ease the challenges of working in chaotic, largely informal environments (as well as appealing to donors). But they also raise risks around privacy and exposure, and they often require partnering with private-sector companies that may wish to profit from access to that data.
In
August, for example, the United Nations High Commissioner for Refugees trumpeted its achievement in providing biometric
identity cards to Rohingya refugees from Myanmar in Bangladesh. What wasn’t
celebrated was the fact that refugees protested the cards both because of the way their
identities were defined—the cards did not allow the option of identifying as
Rohingya, calling them only “Myanmar nationals”—and out of concern that the
biometric data might be shared with Myanmar on repatriation, raising echoes of
the role ethnically marked identity cards played in the Rwandan genocide, among others. Writing about the
Rohingya biometrics collection in the journal Social Media + Society, Mirca
Madianou describes these initiatives as a kind of “techno-colonialism”
in which “digital innovation and data practices reproduce the power asymmetries
of humanitarianism, and…become constitutive of humanitarian crises themselves.”
Unprincipled data
collection is not limited to refugee populations. The New York Daily
News reported on Wednesday that Google has been using
temporary employees, paid through a third party, to collect facial scans of
dark-skinned people in an attempt to better balance its facial recognition
database. According to the article, temporary workers were told “to go after
people of color, conceal the fact that people’s faces were being recorded and
even lie to maximize their data collections.” Target populations included
homeless people and students. They were offered a five-dollar gift card (which is more than refugees and immigrant detainees get for their data) but, critically, were never informed about how the facial scans would be used or stored, or, apparently, even that they were being collected.
A Google spokesperson
told the Daily News that the data was being collected “to
build fairness into Pixel 4’s face unlock feature” in the interests of
“building an inclusive product.” Leaving aside whether contributing to the
technology of a reportedly $900 phone is worthwhile for a homeless person,
the collection of this data without formal consent or legal agreements leaves
it open to being used for any number of other purposes, such as the policing of
the homeless people who contributed it.
For
governments, coerced data collection represents a way of making these chaotic
populations visible, and therefore, in theory, controllable. These are also
groups with very little recourse for rejecting data collection, offering states
the opportunity to test out technologies of the future, like biometric identity
cards, that might eventually become nationwide initiatives. For the private
firms inevitably involved in implementing the complexities of data collection
and management, these groups represent untapped value to surveillance
capitalism, a term coined by Shoshana Zuboff to refer to the way corporations
extract profit from data analysis; for example, by tracking behavior on
Facebook or in Google searches to present targeted advertisements. In general,
refugees, asylum seekers, and homeless people give companies far less data than
the rest of us, meaning that there is still information to extract from them,
compile, and sell for profits that the contributors of the data will never see.
One concern with this
kind of unethical data sourcing is that information collected for one stated goal
may be used for another: In a recent New York Times Magazine article, McKenzie Funk details how data analytics
developed during the previous administration to triage targeting toward
“felons, not families” are now being used to track all immigrants, regardless
of criminal status. Another issue is how the data is stored and protected, and
how it might be misused by other actors in the case of a breach. A major
concern for the Rohingya refugees was what might happen to them if their
biometric data fell into the hands of the very groups that attacked them for
their identity.
Both of these concerns
should sound familiar to all of us. It seems like we hear about new data
breaches on a daily basis, offering up the medical records, Social Security
numbers, and shopping history of millions of customers to hackers and scammers.
But even without such breaches, our data is routinely vacuumed up through our
cell phones, browsers, and interactions with state bureaucracy (e.g., driver’s
licenses)—and misused in immoral, illegal, or dangerous ways.
Facebook has been forced to admit again and again that it has been sharing the
detailed information it gets from tracking its users with third parties,
ranging from apps to advertisers to firms attempting to influence the political
sphere, like Cambridge Analytica. Apple has been accused of similar misuse.
Refugees or detained
asylum seekers have less choice than most people to opt out of certain terms of
service. But these coercive mechanisms affect us all. Getting a five-dollar
gift card (not even cash!) may seem like a low price for a scan of your face, but it isn't so different from what happens when we willingly
click “I Agree” on those terms-of-service boxes. Even if we’re wary of the way
our data is being used, it’s getting harder and harder to avoid giving it out.
As our digital identities become increasingly entangled with functions like
credit reporting, paying bills, and buying insurance, avoiding the big tech
companies becomes more and more difficult. But when we opt in, we do so on the
company’s terms—not our own. User agreements and privacy policies are notoriously difficult for even
experts to understand, and a new Pew Research Center study showed that most Americans are short on digital knowledge, particularly when it comes to privacy and cybersecurity.
Like the subjects of
Google’s unethical facial scans and the recipients of biometric identity cards
in refugee camps, we have little control over how the data is used once we’ve
given it up, and no meaningful metric for deciding when giving up our
information becomes a worthwhile trade-off. We should be shocked by how
companies and governments are abusing the data and privacy rights of the most
vulnerable groups and individuals. But we should also recognize that it’s not
so different from the compromises we are all routinely asked to make ourselves.