The deeper danger of deepfakes: Worry less about politicians and more about powerless people


SEP 20, 2019 | 5:00 AM

In the last year, news coverage of deepfakes — counterfeit videos that make it seem as though people are doing or saying things they never did or said — has positioned them as dangerous to democracy.


Deepfake technology, which uses deep learning (a branch of machine learning) to hybridize or generate human bodies and faces, can easily be used by amateurs to create and distribute fake videos of politicians and others in power. The concern is that during an election season, nefarious actors will circulate faked videos of politicians doing and saying things that never happened, and that these forgeries will drive the public’s choices at the polls.
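To make the mechanics concrete, here is a minimal sketch of the shared-encoder, dual-decoder autoencoder design popularized by early face-swap tools. The layer sizes, the 64x64 crop resolution and the random tensors standing in for aligned face images are illustrative assumptions, not the internals of FakeApp or any other particular app.

    # A toy version of the shared-encoder / dual-decoder autoencoder behind
    # early face-swap tools. Sizes and data are illustrative stand-ins.
    import torch
    import torch.nn as nn

    class Encoder(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
                nn.ReLU(),
                nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
                nn.ReLU(),
            )

        def forward(self, x):
            return self.net(x)

    class Decoder(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16x16 -> 32x32
                nn.ReLU(),
                nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32x32 -> 64x64
                nn.Sigmoid(),
            )

        def forward(self, z):
            return self.net(z)

    encoder = Encoder()                          # one encoder shared by both identities
    decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity

    opt = torch.optim.Adam(
        list(encoder.parameters())
        + list(decoder_a.parameters())
        + list(decoder_b.parameters()),
        lr=1e-3,
    )
    loss_fn = nn.MSELoss()

    # Random tensors standing in for batches of aligned 64x64 face crops.
    faces_a = torch.rand(8, 3, 64, 64)  # person A
    faces_b = torch.rand(8, 3, 64, 64)  # person B

    for step in range(100):
        # Each identity is reconstructed by its own decoder from the shared code.
        loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
                + loss_fn(decoder_b(encoder(faces_b)), faces_b))
        opt.zero_grad()
        loss.backward()
        opt.step()

    # The "swap": encode person A's frames, decode with person B's decoder,
    # producing B's likeness in A's pose and expression.
    with torch.no_grad():
        swapped = decoder_b(encoder(faces_a))

The trick is that the single encoder learns pose and expression features shared across both identities, while each decoder learns to render only one identity; run frame by frame over a video at higher resolution, this is the core of a deepfake.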
Scary? Dystopian? Potentially.
But most of the discussion around deepfakes ignores the fact that disinformation already circulates widely using relatively unsophisticated tactics. Technological fixes like labeling faked content have proven insufficient to stop disinformation, leaving platforms, policymakers and media literacy specialists with few tools to combat the simple tactics already being used to topple governance structures and target vulnerable groups of people with violence.
While there is plenty of panic about deepfakes intervening in electoral politics, far too little is said about the ways faked images and videos are already wielded as weapons against women, people of color and those who question powerful systems.
These cases don’t receive as much media attention because they do not target people in high positions of power. As my report “Deepfakes and Cheap Fakes” (co-written with Joan Donovan) shows, the people who are most vulnerable to being targeted by deepfakes are those without the means to control what counts as evidence about them.
At present, the chief concern about deepfakes centers on amateurs using FakeApp in conjunction with consumer-grade software like Adobe After Effects to create forged pornography. If one has the stomach for it, a search of any major pornography site reveals a surprising amount of deepfaked pornography depicting both celebrities and everyday people.
For instance, in 2016, a 17-year-old Australian woman, Noelle Martin, found her face photoshopped onto pornographic images circulating on the internet, making her the target of abuse and harassment from strangers. Before she had even finished high school, she feared her future job prospects would be ruined.
Martin’s case is not unique. Image-based sexual abuse is also used to inflict other harms, such as suppressing important but often overlooked voices in the press, civil society and political opposition. Many of the most public examples have been directed at female politicians and activists, often by simply using similar-looking actors to depict them in sexualized or pornographic footage.
In the Philippines in 2016, the legal counsel of President Rodrigo Duterte used a faked sex video of Sen. Leila de Lima as evidence to justify her imprisonment. Similar tactics have been used to blackmail female journalists who call out abuses of power, as in the case of Indian journalist Rana Ayyub.
Courts may rule that faked pornographic images are defamatory and could even order them taken down, but victims need money and time to hire a lawyer and bring a case. Moreover, even when a takedown is granted, it is difficult to remove an image across all platforms. As the law stands, legal protections extend to individuals who manipulate images or videos, despite the harm these manipulators inflict. Legal scholars have begun to push back, arguing that a publicly shared photo becomes a private image once it has been edited to depict the subject in a sexualized way.
There are thousands of images of many of us online, in the cloud and on our devices. This makes anyone with a public social media profile fair game to be faked.
Tech companies should protect their users by developing more rigorous protocols for taking down or labeling false or defamatory content. Platform intermediaries should also be held legally accountable for spreading defamatory content.
We need to raise our guard today lest more vulnerable people are victimized by deepfakes tomorrow.
Britt Paris is an affiliate at Data & Society and a co-author of the report “Deepfakes and Cheap Fakes.”
