Deep fake technology outpacing security countermeasures

 

Dec 11, 2018 | Anthony Kimery

In July, Sen. Marco Rubio appeared to be a lone voice in the dark when he declared in remarks at the Heritage Foundation that "Deep Fake" technology, "which manipulates audio and video of real people saying or doing things they never said or did," poses a serious menace to national security.

Indeed. This is an industry that is growing rapidly, far outpacing both the understanding of its national security implications and the development of biometric de-facializing countermeasures, which some authorities told Biometric Update are "very likely" to become a new off-shoot of the biometric industry.

According to a report published by Markets and Markets in 2017, the global facial recognition market was estimated at $3.37 billion in 2016 and is expected to grow to $7.76 billion by 2022, an annual growth rate of 13.9 percent. But this could be stunted by growth in the biometric deception technology market, as the existing biometric industry may be forced to work on developing de-facializing countermeasures, industry authorities said.

Rubio warned, "I believe that this is the next wave of attacks against America and Western democracies … the ability to produce fake videos that … can only be determined to be fake after extensive analytical analysis." Going forward, that threat is formidable given the reach of social media. As Bobby Chesney, the Charles I. Francis Professor in Law and Associate Dean for Academic Affairs at the University of Texas School of Law and Director of UT-Austin’s Robert S. Strauss Center for International Security and Law, and Danielle Citron, the Morton & Sophia Macht Professor of Law at the University of Maryland Carey School of Law and author of Hate Crimes in Cyberspace, noted, citing the Pew Research Center: "As of August 2017, two-thirds of Americans (67 percent) reported … that they get their news at least in part from social media. This is fertile ground for circulating deep fake content. Indeed, the more salacious, the better."

It’s a menace that has quietly sparked a gold rush in the biometrics industry to begin developing "de-facializing" technologies to thwart the already lucrative deep fake technology business.

According to intelligence and military officials, the inability to biometrically de-facialize deep fakes is rapidly becoming such a dangerous concern that Department of Defense (DOD) and Intelligence Community (IC) components are beginning to work vigorously on de-facializing biometric countermeasures. For example, the Office of the Director of National Intelligence’s (ODNI) Intelligence Advanced Research Projects Activity (IARPA) has been sponsoring proof-of-concept research programs targeting the development of facial biometric "de-identification" technologies.

Meanwhile, the Pentagon’s Defense Advanced Research Projects Agency (DARPA) has funded a Media Forensics project tasked with exploring and developing technologies to automatically weed out deep fake videos and manipulated digital media.

The 2017 threatcasting report, The New Dogs of War: The Future of Weaponized Artificial Intelligence, by the Army Cyber Institute at West Point and Arizona State University’s Threatcasting Lab, also warned that, “The clearest and most apparent threat that emerged from the workshop raw data was a unique way in which AI could be weaponized. Surveillance and coercion are not new threats, but when conducted with the speed, power, and reach of AI, the danger is newly amplified.”

Continuing, the report stated, “The goal of the adversary would depend on the nature of the threat actor (criminal, terrorist, state sponsored). Regardless, the weaponization of AI to surveil and coerce individuals is a powerful emerging threat. As a developing platform for psychological, physical, or systemic infiltration, AI is quickly becoming the realization of a modern dog of war, unleashing the worst of humanity and our technology onto ourselves,” adding, “Although clearly more research is needed, it is imperative to take immediate pragmatic steps to lessen the destabilizing impacts of nefarious AI actors. If we are better able to understand and articulate possible threats and their impacts to the American population, economy, and livelihood, then we can begin to guard against them while crafting a counter-narrative.”

The problem is somber

“The West is ill-prepared for the wave of ‘deep fakes’ that AI could unleash,” and, “As long as tech research and counter-disinformation efforts run on parallel, disconnected tracks, little progress will be made in getting ahead of [these] emerging threats,” recently wrote the Brookings Institution’s Chris Meserole, Fellow, Foreign Policy, Center for Middle East Policy, and Alina Polyakova, David M. Rubenstein Fellow, Foreign Policy, Center on the United States and Europe.

“Thanks to bigger data, better algorithms, and custom hardware, in the coming years, individuals around the world will increasingly have access to cutting-edge artificial intelligence,” which when combined with “deep learning and generative adversarial networks, [will make] it possible to doctor images and video so well that it’s difficult to distinguish manipulated files from authentic ones. And thanks to apps like FakeApp and Lyrebird, these so-called ‘deep fakes’ can now be produced by anyone with a computer or smartphone.”

In October, in their paper, Disinformation on Steroids: The Threat of Deep Fakes, published by the Council on Foreign Relations, Chesney and Citron worrisomely presaged that, “Disinformation and distrust online are set to take a turn for the worse. Rapid advances in deep learning algorithms to synthesize video and audio content have made possible the production of ‘deep fakes’ — highly realistic and difficult-to-detect depictions of real people doing or saying things they never said or did. As this technology spreads, the ability to produce bogus, yet credible video and audio content will come within the reach of an ever larger array of governments, nonstate actors, and individuals,” and, “as a result, the ability to advance lies using hyperrealistic, fake evidence is poised for a great leap forward.”

Chesney and Citron forewarned that, “The array of potential harms that deep fakes could entail is stunning,” explaining that, “A well-timed and thoughtfully scripted deep fake or series of deep fakes could tip an election, spark violence in a city primed for civil unrest, bolster insurgent narratives about an enemy’s supposed atrocities, or exacerbate political divisions in a society. The opportunities for the sabotage of rivals are legion — for example, sinking a trade deal by slipping to a foreign leader a deep fake purporting to reveal the insulting true beliefs or intentions of US officials.”

“Consider these terrifying possibilities,” they posited:

• Fake videos could feature public officials taking bribes, uttering racial epithets, or engaging in adultery;
• Politicians and other government officials could appear in locations where they were not, saying or doing horrific things that they did not;
• Fake videos could place them in meetings with spies or criminals, launching public outrage, criminal investigations, or both;
• Soldiers could be shown murdering innocent civilians in a war zone, precipitating waves of violence and even strategic harms to a war effort;
• A deep fake might falsely depict a white police officer shooting an unarmed black man while shouting racial epithets;
• A fake audio clip might “reveal” criminal behavior by a candidate on the eve of an election;
• A fake video might portray an Israeli official doing or saying something so inflammatory as to cause riots in neighboring countries, potentially disrupting diplomatic ties or even motivating a wave of violence;
• False audio might convincingly depict US officials privately “admitting” a plan to commit this or that outrage overseas, exquisitely timed to disrupt an important diplomatic initiative; and,
• A fake video might depict emergency officials “announcing” an impending missile strike on Los Angeles or an emergent pandemic in New York, provoking panic and worse.

“Note that these examples all emphasize how a well-executed and well-timed deep fake might generate significant harm in a particular instance, whether the damage is to physical property and life in the wake of social unrest or panic or to the integrity of an election,” they wrote, ominously noting that, “The threat posed by deep fakes … has a long-term, systemic dimension.”

“The looming era of deep fakes will be different, however, because the capacity to create hyperrealistic, difficult-to-debunk fake video and audio content will spread far and wide,” they warned, pointing out that, “Advances in machine learning are driving this change. Most notably, academic researchers have developed generative adversarial networks that pit algorithms against one another to create synthetic data (i.e., the fake) that is nearly identical to its training data (i.e., real audio or video). Similar work is likely taking place in various classified settings, but the technology is developing at least partially in full public view with the involvement of commercial providers. Some degree of credible fakery is already within the reach of leading intelligence agencies, but in the coming age of deep fakes, anyone will be able to play the game at a dangerously high level. In such an environment, it would take little sophistication and resources to produce havoc. Not long from now, robust tools of this kind and for-hire services to implement them will be cheaply available to anyone.”

Rubio had earlier highlighted the potential exploitation of deep fake imagery in elections at a Senate Select Committee on Intelligence hearing in May to consider the nomination of William R. Evanina to be Director of the National Counterintelligence and Security Center (NCSC) in the Office of the Director of National Intelligence (ODNI), raising the threat of deep fakes and how they could be used to cause chaos in the electoral system.

During the hearing, Rubio raised the issue of deep fakes, saying, “I want to talk about a separate topic that I don’t believe has ever been discussed before … certainly not today.” He asked Evanina if he was familiar with the term “deep fakes.”

Surprisingly, for a veteran intelligence official who was chief of the CIA’s Counterespionage Group after a career in the FBI heading the Bureau’s National Security Branch and Counterintelligence Division, and as a Supervisory Special Agent in the new Joint Terrorism Task Force, Evanina responded, “I’m not, sir.” Today, Evanina is the executive officer of the US Office of the National Counterintelligence Executive (ONCIX) and director of NCSC.

Rubio educated Evanina, explaining that, “A deep fake is the ability to manipulate sound, images, or video to make it appear that a certain person did something that they didn’t do. These videos, in fact, are increasingly realistic. The quality of these fakes is rapidly increasing due to artificial intelligence [AI] machine learning algorithms paired with facial mapping software [that makes] it easy and cheap to insert someone’s face into a video and produce a very realistic-looking video of someone saying or doing something they never said or did. This, by the way, technology is pretty widely available on the Internet, and people have used it already for all sorts of nefarious purposes at the individual level. I think you can only imagine what a nation-state could do with that technology, particularly to our politics.”

So, Rubio again asked, “You’ve never heard of that term?” before asking, “Is there any work being done anywhere in the US government to begin to confront the threat that could be posed — that will be posed, in my view, by the ability to produce realistic looking fake video and audio that could be used to cause all sorts of chaos in our country?”

Seeming to backpedal, Evanina replied, “The answer is yes … the Intelligence Community and federal law enforcement is actively working to not only understand the complexities and capabilities of adversaries, but what from a predictive analysis perspective we may face going forward …”

Rubio went on to say he suspected “that 99 percent of the American population doesn’t know what it is, even though, frankly, for years, they’ve been watching deep fakes in science fiction movies and the like, in which these incredible special effects are as realistic as they’ve ever been thanks to the talent of the people. But never before have we sort of seen that capability become so apparent, or so available, right off the shelf.”

“And, then, [when] you look at [the] sort of trends that we’ve seen in the 21st century, the weaponization of information … let me just say there’s always been propaganda in the world and information has always been a powerful tool to use against a competitor or an adversary,” Rubio said, adding, “What we’ve never had in human history is the ability to disseminate information so rapidly, so instantaneously, for it to have an impact on so many people before you’re capable of reacting to it,” and “the vast majority of people watching that image on television are going to believe it. And if that happens two days before an election, or the night before an election, it could influence the outcome of your race.”

Rubio ominously declared that, “The ability to influence the outcome by putting out a video of a candidate on the eve before the election doing or saying something, strategically placed, strategically altered, in such a way to drive some narrative, [it] could flip enough votes in the right place to cost someone an election … and what you have is not a threat to our elections, but a threat to our republic, a Constitutional crisis unlike any we have ever faced in the modern history of this country.”

More recently, Rubio received bipartisan support when, on September 3, Reps. Adam Schiff (D-Calif.), Stephanie Murphy (D-Fla.), and Carlos Curbelo (R-Fla.) sent a letter to Director of National Intelligence (DNI) Dan Coats requesting that the Intelligence Community (IC) assess the national security threats posed by “deep fake” technology, and that a report be prepared for “Congress and the public about the implications of new technologies that allow malicious actors to fabricate audio, video, and still images” by December 14.

“Hyper-realistic digital forgeries — popularly referred to as ‘deep fakes’ — use sophisticated machine learning techniques to produce convincing depictions of individuals doing or saying things they never did, without their consent or knowledge,” the legislators stated in their letter to Coats. “By blurring the line between fact and fiction, deep fake technology could undermine public trust in recorded images and videos as objective depictions of reality.”

The bipartisan letter pointed out that, “You have repeatedly raised the alarm about disinformation campaigns in our elections and other efforts to exacerbate political and social divisions in our society to weaken our nation. We are deeply concerned that deep fake technology could soon be deployed by malicious foreign actors,” going on to explain that, “Forged videos, images, or audio could be used to target individuals for blackmail or for other nefarious purposes. Of greater concern for national security, they could also be used by foreign or domestic actors to spread misinformation. As deep fake technology becomes more advanced and more accessible, it could pose a threat to United States public discourse and national security, with broad and concerning implications for offensive active measures campaigns targeting the United States.”

Thus, they said, “Given the significant implications of these technologies and their rapid advancement, we believe that a thorough review by the Intelligence Community is appropriate, including an assessment of possible counter-measures and recommendations to Congress. Therefore, we request that you consult with the heads of the appropriate elements of the Intelligence Community to prepare a report to Congress, including an unclassified version.”

“By blurring the line between fact and fiction, deep fake technology could undermine public trust in recorded images and videos as objective depictions of reality,” and “could become a potent tool for hostile powers seeking to spread misinformation. The first step to help prepare the Intelligence Community, and the nation, to respond effectively, is to understand all we can about this emerging technology, and what steps we can take to protect ourselves,” Schiff said in a statement. “It’s my hope that the DNI will quickly work to get this information to Congress to ensure that we are able to make informed public policy decisions.”

“We need to know what countries have used it against US interests, what the US government is doing to address this national security threat, and what more the Intelligence Community needs to effectively counter the threat,” said Murphy, also a member of the House Committee on Armed Services.

Curbelo added that, “Deep fakes have the potential to disrupt every facet of our society and trigger dangerous international and domestic consequences. With implications for national security, human rights, and public safety, the technological capabilities to produce this kind of propaganda targeting the United States and Americans around the world is unprecedented.”

In his article, Researchers Wager on a Possible Deepfake Video Scandal During the 2018 US Midterm Elections, Jeremy Hsu wrote that, “A quiet wager [was] taken … among researchers who study artificial intelligence techniques and the societal impacts of such technologies. They’re betting whether or not someone will create a so-called deep fake video about a political candidate that receives more than 2 million views before getting debunked by the end of 2018.”

“This is a serious … very serious … national security concern, for a whole lot of obvious reasons,” a senior US intelligence official involved in working on counter-GAN technologies told Biometric Update on background. “Let’s imagine a politically adversarial nation uses this technology to, say, create a fake video that appears to put US officials or politicians in compromising situations, then leaks it? What do you think will happen? It’ll go viral, that’s what’ll happen … causing domestic and international chaos, depending what the compromise is. We need de-facializing counter-measures in place to immediately determine – biometrically — whether the target individuals’ images are doctored, or biometrically whether they are real. You can see what the disquieting problem and the challenge is.”

“Hiding, concealing, or replacing the faces – the identities – of real targets of interest to intelligence officials trying to connect potential terrorists or terrorist cells they’ve flagged by datamining Facebook, Twitter, YouTube, and other social media photos … well, it’s a very disturbing new intelligence problem, and we need to quickly develop effective counter-technologies to overcome these biometric concealing problems,” a counterterrorism analyst told Biometric Update on background.

“Imagine producing a video that has me or Sen. [Mark] Warner saying something we never said on the eve of an election. By the time I prove that video is fake — even though it looks real — it’s too late,” Rubio warned.

Continuing, he emphasized his concern: “If we could imagine for a moment, a foreign intelligence agency could use deep fakes to produce a fake video of an American politician using a racial epithet or taking a bribe or anything of that nature. They could use a fake video of a US soldier massacring civilians overseas, they could use a fake video of a US official admitting a secret plan to do some conspiracy theory of some kind, they could use a fake video of a prominent official discussing some sort of impending disaster that could [cause] panic. And imagine a compelling video like this produced on the eve of an election, or a few days before a major public policy decision with a culture that’s already — has already a kind of a built-in bias towards believing outrageous things; a media that is quick to promulgate it and spread it. And, of course, the social media where you can’t stop its spread.”

Similarly, Chesney and Citron noted that, “In a recent report, the Belfer Center highlighted the national security implications of sophisticated forgeries. For instance, an adversary could acquire real (and sensitive) documents through cyber-espionage and leak the real documents along with forgeries supported by ‘leaked’ forged audio and video. In similar fashion, public trust may be shaken, no matter how credible the government’s rebuttal of the fake videos. Making matters worse, news organizations may be chilled from rapidly reporting real, disturbing events for fear that the evidence of them will turn out to be fake (and one can well imagine someone trying to trap a news organization in exactly this way).”

The problem is spreading – and quickly

The breakneck pace of groundbreaking advancements in Generative Adversarial Networks (GANs), the technology that makes it increasingly “easier to create natural, and legitimate looking images from scratch,” “is a great feat, [but] it comes with major security problems, such as using synthetic photos for identification and authentication applications,” warned researchers Shahroz Tariq, Sangyup Lee, Hoyoung Kim, Youjin Shin, and Simon S. Woo, all at The State University of New York, Korea (SUNY-Korea), in their recently published paper, Detecting Both Machine and Human Created Fake Face Images In the Wild, in which they stated their research is showing “promising results in detecting GAN generated images with high accuracy.”

GANs pit two separate networks against each other: a generator that creates imagery based on the data it is trained on, and a discriminator network (the adversary) that assesses whether the generated images are real or fake.
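
To make that tug-of-war concrete, the following is a minimal, illustrative sketch of a GAN training loop in PyTorch. The network sizes, learning rates, and flattened 64×64 image format are assumptions for readability, not the architecture of any particular deep fake tool.

```python
# Minimal GAN sketch: a generator learns to fool a discriminator that is
# simultaneously learning to separate real images from generated ones.
import torch
import torch.nn as nn

LATENT_DIM, IMG_DIM = 100, 64 * 64  # assumed latent size and flattened image size

generator = nn.Sequential(            # maps random noise to a fake image
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh())

discriminator = nn.Sequential(        # scores an image as real (1) or fake (0)
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def training_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Train the discriminator to tell real images from generated ones.
    noise = torch.randn(batch, LATENT_DIM)
    fake_images = generator(noise).detach()
    d_loss = bce(discriminator(real_images), real_labels) + \
             bce(discriminator(fake_images), fake_labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator.
    noise = torch.randn(batch, LATENT_DIM)
    g_loss = bce(discriminator(generator(noise)), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```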

But new GANs and GAN-type technologies continue to be created – including for cinema – for faking both facial and body gestures, as well as for aging and de-aging a person, rendering a very convincing video doppelgänger of the target person that can fool biometric readers and facial datamining. One example is Face2Face: Real-time Face Capture and Reenactment of RGB Videos, a paper by Justus Thies and Matthias Nießner of the Technical University of Munich; Michael Zollhöfer, Stanford University; Marc Stamminger, University of Erlangen-Nuremberg; and Christian Theobalt, Max Planck Institute for Informatics, developed at the Visual Computing Group (VCG) at the Technical University of Munich.

“Face2Face is a real-time face tracker whose analysis-by-synthesis approach precisely fits a 3D face model to a captured RGB video,” VCG says on its website. “This produces high accuracy tracking, allowing for photo-realistic re-rendering and modifications of a target video.” In other words, “in a nutshell,” VCG says, “one can change the expressions of a target video in real time. This project has received incredible attention with several million YouTube views and wide range of media coverage. We even gave live demos on several occasions on public television!”
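
To give a flavor of what “analysis-by-synthesis” means in practice, here is a toy sketch: fit the parameters of a face model so that its rendering matches a captured frame by minimizing photometric error. The linear “renderer” below is a made-up placeholder, not the actual Face2Face 3D model or its optimizer.

```python
# Toy analysis-by-synthesis loop: adjust face-model parameters until the
# rendered image matches the captured frame (photometric error is minimized).
import torch

TARGET_FRAME = torch.rand(3, 64, 64)          # pretend this is a captured RGB frame
basis = torch.randn(80, 3 * 64 * 64) * 0.01   # placeholder linear "face model" basis
mean_face = torch.rand(3 * 64 * 64)

params = torch.zeros(80, requires_grad=True)  # identity/expression coefficients to fit
optimizer = torch.optim.Adam([params], lr=0.05)

for step in range(200):
    rendered = (mean_face + params @ basis).view(3, 64, 64)
    photometric_error = torch.mean((rendered - TARGET_FRAME) ** 2)
    optimizer.zero_grad()
    photometric_error.backward()
    optimizer.step()

# Once fitted, the same coefficients could be re-posed or transferred to another
# face model to drive a reenactment -- the step that makes such tools worrying.
```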

As an example, see the startling before and after VFX for the Amazon series, “The Man in the High Castle,” and the “de-aging” technology used on ten actors.

Researchers Tero Karras, Timo Aila, and Samuli Laine at NVIDIA, and Jaakko Lehtinen at NVIDIA and Aalto University, described “a new training methodology” for GANs in their recent paper, Progressive Growing of GANs for Improved Quality, Stability, and Variation, which was presented as a conference paper at the International Conference on Learning Representations (ICLR) 2018.

“The key idea,” they stated, “is to grow both the generator and discriminator progressively [by] starting from a low resolution” and adding “new layers that model increasingly fine details as training progresses. This both speeds the training up and greatly stabilizes it, allowing” production of “images of unprecedented quality, e.g., CELEBA images at 1024×1024.” They also proposed “a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10.”
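
The sketch below illustrates the progressive-growing idea in PyTorch: start the generator at a coarse resolution and bolt on new upsampling blocks as training advances. The layer sizes and growth schedule are illustrative assumptions, not the exact architecture from the NVIDIA paper.

```python
# Progressive growing, conceptually: each call to grow() doubles the output
# resolution by appending an upsample + convolution block to the generator.
import torch
import torch.nn as nn

class GrowingGenerator(nn.Module):
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        # Initial block maps the latent vector to a coarse 4x4 feature map.
        self.initial = nn.Sequential(
            nn.Linear(latent_dim, 128 * 4 * 4), nn.LeakyReLU(0.2))
        self.blocks = nn.ModuleList()                    # progressively added blocks
        self.to_rgb = nn.Conv2d(128, 3, kernel_size=1)   # feature map -> RGB image

    def grow(self) -> None:
        """Double the output resolution by appending an upsample+conv block."""
        self.blocks.append(nn.Sequential(
            nn.Upsample(scale_factor=2, mode="nearest"),
            nn.Conv2d(128, 128, kernel_size=3, padding=1),
            nn.LeakyReLU(0.2)))

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        x = self.initial(z).view(-1, 128, 4, 4)
        for block in self.blocks:
            x = block(x)
        return torch.tanh(self.to_rgb(x))

g = GrowingGenerator()
print(g(torch.randn(1, 128)).shape)   # starts at 4x4
g.grow(); g.grow()                    # training would then continue at 16x16, and so on
print(g(torch.randn(1, 128)).shape)
```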

Collected by Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton, CIFAR-10 is a dataset of 60,000 32×32 color images in ten classes, with 6,000 images per class, and 50,000 training images and 10,000 test images. The dataset is divided into five training batches and one test batch, each with 10,000 images. According to the CIFAR-10 website, “The test batch contains exactly 1,000 randomly-selected images from each class. The training batches contain the remaining images in random order, but some training batches may contain more images from one class than another. Between them, the training batches contain exactly 5,000 images from each class.”
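
For readers who want to inspect the benchmark themselves, this is one common way to load CIFAR-10 via torchvision; the download path is an arbitrary example.

```python
# Download CIFAR-10 and confirm the split and class structure described above.
import torchvision

train_set = torchvision.datasets.CIFAR10(root="./data", train=True, download=True)
test_set = torchvision.datasets.CIFAR10(root="./data", train=False, download=True)

print(len(train_set), len(test_set))   # 50000 training and 10000 test images
print(train_set.classes)               # the ten classes, e.g. 'airplane', 'cat', ...
image, label = train_set[0]            # a 32x32 PIL image and its integer label
print(image.size, train_set.classes[label])
```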

“CIFAR data sets are one of the most well-known data sets in computer vision tasks,” said Furkan Kınli, a Business Intelligence Intern at Turk Telekom.

In addition, Karras and his colleagues described “several implementation details that are important for discouraging unhealthy competition between the generator and discriminator.” Finally, they “suggest[ed] a new metric for evaluating GAN results, both in terms of image quality and variation.” And, as “an additional contribution,” they constructed “a higher-quality version of the CELEBA dataset.”

“Avoiding automatic face detection and recognition is becoming more difficult as talented engineers search for ways to improve these systems. Real-world countermeasures such as the privacy visor or CV Dazzle are not always effective and may even make the subject more recognizable in the real world,” explained Michael J. Wilber, Vitaly Shmatikov, and Serge Belongie, Department of Computer Science, Cornell University, in their research paper, Can We Still Avoid Automatic Face Detection?

They reported that their studies of “image transformation techniques that help a privacy-conscious individual avoid being automatically identified” revealed “several practical problems with the methods we outline. First, the photo uploader, not the individual, must remember to use the image perturbation techniques. Second, many of these techniques make the image look worse to humans. However, these techniques illuminate the strengths and weaknesses of state-of-the-art face detectors used in common social network platforms. We now know that Facebook has little trouble detecting faces in low-light conditions, but occlusions and noise are still difficult to find. If a privacy-seeking individual wishes to develop more ways of avoiding automatic detection, building from these observations could be a good first start.”
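
As a toy illustration of the kind of pre-upload perturbations discussed above, the sketch below adds Gaussian noise and blacks out a facial region. These are generic examples, not the specific techniques evaluated by Wilber, Shmatikov, and Belongie, and they may not defeat modern detectors.

```python
# Generic image perturbations of the sort a privacy-conscious uploader might apply.
import numpy as np
from PIL import Image

def add_gaussian_noise(img: Image.Image, sigma: float = 25.0) -> Image.Image:
    """Add pixel noise, which some detectors still struggle with."""
    arr = np.asarray(img).astype(np.float32)
    noisy = arr + np.random.normal(0.0, sigma, arr.shape)
    return Image.fromarray(np.clip(noisy, 0, 255).astype(np.uint8))

def occlude_box(img: Image.Image, box: tuple) -> Image.Image:
    """Black out a rectangular region (left, upper, right, lower), e.g. the eyes."""
    arr = np.asarray(img).copy()
    left, upper, right, lower = box
    arr[upper:lower, left:right] = 0
    return Image.fromarray(arr)

# Example usage with a hypothetical photo path and eye region:
# photo = Image.open("selfie.jpg")
# perturbed = occlude_box(add_gaussian_noise(photo), (60, 40, 140, 70))
# perturbed.save("selfie_perturbed.jpg")
```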

What’s ironic about the sudden concern about deep fakes and countermeasures is that a little more than a decade ago, research was supported by the Laboratory for International Data Privacy at Carnegie Mellon University and DARPA — managed by the Naval Sea Systems Command — “to enable the sharing of video data with scientific assurances of privacy protection while keeping the data practically useful,” wrote the authors of the resulting research paper, Preserving Privacy by De-identifying Facial Images.

“What is needed,” they said, “is an algorithm to de-identify faces in video data such that many facial characteristics remain, yet face recognition software cannot reliably identify subjects whose images are captured in the data. This work formally introduces the ‘preserved face de-identification’ problem, in which face recognition software is restricted and details remaining in the face are minimally distorted. Sharing only de-identified data restores the current expectation of privacy so that society does not have to choose safety over privacy, but society can have both safety and privacy.”

That little-known research, conducted shortly after 9/11, was carried out by Elaine Newton, Latanya Sweeney, and Bradley Malin at the Carnegie Mellon University School of Computer Science.

They concluded that, “In the context of sharing video surveillance data, a significant threat to privacy is face recognition software, which can automatically identify known people, such as from a database of drivers’ license photos, and thereby track people regardless of suspicion. This paper introduces an algorithm to protect the privacy of individuals in video surveillance data by de-identifying faces such that many facial characteristics remain but the face cannot be reliably recognized. A trivial solution to de-identifying faces involves blacking out each face. This thwarts any possible face recognition, but because all facial details are obscured, the result is of limited use. Many ad hoc attempts, such as covering eyes or randomly perturbing image pixels, fail to thwart face recognition because of the robustness of face recognition methods. This paper presents a new privacy-enabling algorithm, named k-Same, that scientifically limits the ability of face recognition software to reliably recognize faces while maintaining facial details in the images. The algorithm determines similarity between faces based on a distance metric and creates new faces by averaging image components, which may be the original image pixels …”
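
A compact sketch of that idea follows: each face is replaced by the average of the k faces closest to it, so no de-identified image maps cleanly back to a single individual. This is a simplified pixel-space illustration (roughly in the spirit of "k-Same-Pixel"), not the authors' exact algorithm or code.

```python
# Simplified k-Same-style de-identification over a gallery of face images.
import numpy as np

def k_same_pixel(faces: np.ndarray, k: int = 5) -> np.ndarray:
    """faces: array of shape (n_faces, height, width); returns a de-identified set."""
    n = faces.shape[0]
    flat = faces.reshape(n, -1).astype(np.float64)
    deidentified = np.empty_like(flat)
    for i in range(n):
        # Euclidean distance from face i to every face in the gallery.
        dists = np.linalg.norm(flat - flat[i], axis=1)
        nearest = np.argsort(dists)[:k]               # the k most similar faces
        deidentified[i] = flat[nearest].mean(axis=0)  # average their pixels
    return deidentified.reshape(faces.shape)

# Example: 100 random 64x64 "faces" stand in for a real gallery.
gallery = np.random.rand(100, 64, 64)
protected = k_same_pixel(gallery, k=5)
```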

Combating the problem

“These images,” Tariq, Lee, Kim, Shin, and Woo said, “are extremely challenging for normal people to tell whether [they are] real human faces or machine generated faces. And GANs can be possibly misused or abused to hurt people similar to deep fake.” Deep fake is an artificial intelligence-based machine learning technique for human image synthesis that is increasingly being used to combine and superimpose existing images and videos onto source images or videos. Such fakes are of particular concern to homeland and national security agencies because they can be – and are being – posted on social media by terrorists, transnational criminal organizations, and rogue states as part of disinformation programs. They can also be created and spread virally on social media by political ideologues to plant “fake news” intended to influence social and political opinions and beliefs.

Indeed, deep fakes can be – and have been – used to create fake news and malicious hoaxes.

Tariq, Lee, Kim, Shin, and Woo echoed Rubio and others’ concern in their research paper, saying that, “Due to the significant advancements in image processing and machine learning algorithms, it is much easier to create, edit, and produce high quality images. However, attackers can maliciously use these tools to create legitimate looking but fake images to harm others, bypass image detection algorithms, or fool image recognition classifiers.”

Supported by the Korean Ministry of Science, ICT and Future Planning (MSIP), the Institute for Information & Communications Technology Program (IITP) — under the ICT Consilience Creative program supervised by the Institute for Information & Communications Technology Promotion — and the National Research Foundation (NRF) of the Korean Ministry of Science and ICT (MSIT), the researchers explained that, “The remarkable development of AI and machine learning technologies have assisted in solving the most challenging tasks in the areas of computer vision, natural language processing, image processing, etc.”

However, they pointed out, “Recently, machine learning algorithms are extensively integrated for photo-editing applications to help create, edit, and synthesize images, and improve image quality. Hence, people without an expert knowledge of photography editing can easily create sophisticated and high quality images. Also, many photo editing tools and apps provide various interesting functionality to attract users such as face swap. For example, face swap apps are widely used to automatically detect faces in photos and swap the face of one person with another person or animal. While face swap is fun and wide-spread in social network or Internet, it can be offensive and someone might not feel comfortable if their faces are swapped or spoofed by someone else for malicious causes.”

“Therefore,” Tariq, Lee, Kim, Shin, and Woo concluded, “abusing these multimedia technologies raise significant social issues and concerns. In particular, one of them is to create fake pornography, where anyone can put a victim’s face into a naked body to humiliate and intimidate the victim. In addition, humans can manually create more sophisticated fake or face swap images using high quality photo editing tools such as Adobe Photoshop. These tools have become much more advanced to create realistic and elaborate fake images, which are difficult to determine the forgery by normal people. The step-by-step instructions and tutorial to create these types of face swaps are easily available [on] YouTube. Therefore, these technologies can be used for defamation, impersonation, and distortion of facts.”

“Furthermore,” they stated, “these fake information can be quickly and widely disseminated [via the] Internet through social media. Hence, maliciously using these machine learning-enabled multimedia technologies for image forgery can lead into significant problems in not only fake pornography creation, but also hate crimes and frauds.”

“In order to detect and prevent these malicious effect[s],” Tariq, Lee, Kim, Shin, and Woo said, “diverse detection methodologies can be applied. However, most of prior research is based on analyzing meta-data or characteristics of image compression information, which can easily be cloaked. Also, splicing or copy-move detection techniques are not effective when attackers forge elaborate images using GANs. In addition, there is no existing research to detect GANs created images. Therefore, in this paper, we tackle the problem of detecting both GANs generated human faces and human-created fake images with neural networks using ensemble methods.”

Their research proposes “neural network based classifiers to detect fake human faces created by both machines and humans,” noting that they used “ensemble methods to detect GANs-created fake images and employ pre-processing techniques to improve fake face image detection created by humans.” Their approach focused on “image contents for classification, and do not use meta-data of images.” They reported that their “preliminary results show that [they were able to] effectively detect both GANs-created images, and human-created fake images with 94 percent and 74.9 percent AUROC score,” respectively.
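
A rough sketch of the evaluation idea behind such numbers is shown below: several classifiers each output a probability that a face image is fake, the ensemble averages those scores, and the result is judged by AUROC. The classifiers and data here are placeholders, not the authors' actual models or features.

```python
# Ensemble scoring of fake-vs-real face features, evaluated with AUROC.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Placeholder features (e.g., flattened or embedded face images) and labels:
# 1 = fake (GAN- or human-created), 0 = real.
X = np.random.rand(1000, 128)
y = np.random.randint(0, 2, size=1000)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

members = [RandomForestClassifier(n_estimators=100, random_state=0),
           LogisticRegression(max_iter=1000)]
for model in members:
    model.fit(X_train, y_train)

# Ensemble score = mean of each member's predicted probability of "fake".
scores = np.mean([m.predict_proba(X_test)[:, 1] for m in members], axis=0)
print("AUROC:", roc_auc_score(y_test, scores))   # ~0.5 on random data, by construction
```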

Similarly, the authors of one of the ODNI-sponsored projects, Yifan Wu, Fan Yang, and Haibin Ling, stated in their recent research paper, Privacy-Protective-GAN for Face De-identification, that, “Specially for the face de-identification problem, the dilemma is that, on the one hand, we want the de-identified image to look as different as possible from the original image to ensure the removal of identity; on the other hand, we expect the de-identified image to retain as much [biometric] structural information in the original image as possible so that the image utility remains.”
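
That dilemma can be expressed as a two-term loss. The sketch below is a simplified illustration of the trade-off, not the loss actually used in the Privacy-Protective-GAN paper; the identity-embedding network, weights, and loss forms are assumptions.

```python
# Illustrative de-identification objective: keep facial structure while pushing
# the identity embedding away from the original person's embedding.
import torch
import torch.nn.functional as F

def deidentification_loss(original: torch.Tensor,
                          deidentified: torch.Tensor,
                          identity_net,                 # maps a face to an identity embedding
                          structure_weight: float = 1.0,
                          identity_weight: float = 1.0) -> torch.Tensor:
    # Keep structure: the de-identified face should stay close to the original pixels.
    structure_loss = F.l1_loss(deidentified, original)
    # Remove identity: its embedding should be far from the original's embedding.
    sim = F.cosine_similarity(identity_net(deidentified), identity_net(original), dim=-1)
    identity_loss = sim.mean()   # minimizing similarity pushes the identities apart
    return structure_weight * structure_loss + identity_weight * identity_loss
```

Tuning the two weights is exactly the dilemma Wu, Yang, and Ling describe: too much identity pressure destroys image utility, too little leaves the person recognizable.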

Admittedly, “more research is needed for detecting human-created fake images due to various complexity in fake image generation,” Tariq and his colleagues conceded, but they also emphasized that “we believe more training data can help improve performance,” adding, “For future work, we plan to further enhance our face detection and noise-filtering algorithms and produce more human-created fake face images. Also, we will train our models with different levels of Photoshop which might potentially strengthen our results.”

“The ideal response to the deep fake threat would be the simultaneous development and diffusion of software capable of rapidly and reliably flagging deep fakes, and then keeping pace with innovations in deep fake technology,” Chesney and Citron wrote, adding, “If such technology exists and is deployed through the major social media platforms especially, it would go some way towards ameliorating the large-scale harms described above (though it might do little to protect individuals from deep fake abuses that don’t require distribution-at-scale through a gatekeeping social media platform).”

“Unfortunately,” they said, “it is not clear that the defense is keeping pace for now. An arms race to fortify the technology is on, but Dartmouth professor Hany Farid, the pioneer of PhotoDNA (a technology that identifies and blocks child pornography), warns: ‘We’re decades away from having forensic technology that … [could] conclusively tell a real from a fake. If you really want to fool the system you will start building into the deep fake ways to break the forensic system.’ This suggests the need for an increase — perhaps a vast increase — in the resources being devoted to the development of such technologies.”

However, they also said that “the challenges of mitigating the threat of deep fakes are real, but that does not mean the situation is hopeless.” Chesney and Citron said, “Enhancing current efforts by the National Science Foundation, DARPA, and IARPA could spur breakthroughs that lead to scalable and robust detection capacities and digital provenance solutions. In the meantime, the current wave of interest in improving the extent to which social media companies seek to prevent or remove fraudulent content has pushed companies to take advantage of available detection technologies — flagging suspect content for further scrutiny, providing clear warnings to users, removing known deep fakes, and sharing such content in an effort to help prevent it from being reposted elsewhere (following a model used to limit the spread of child pornography). While by no means a complete solution, all of this would be a useful step forward.”

Headed by Dr. Matt Turek since July, DARPA’s Information Innovation Office’s Media Forensics (MediFor) program “brings together world-class researchers to attempt to level the digital imagery playing field, which currently favors the manipulator, by developing technologies for the automated assessment of the integrity of an image or video and integrating these in an end-to-end media forensics platform,” MediFor says. And, “If successful, the MediFor platform will automatically detect manipulations, provide detailed information about how these manipulations were performed, and reason about the overall integrity of visual media to facilitate decisions regarding the use of any questionable image or video.”

As MediFor explained, “manipulation of visual media is enabled by the wide scale availability of sophisticated image and video editing applications as well as automated manipulation algorithms that permit editing in ways that are very difficult to detect either visually or with current image analysis and visual media forensics tools,” emphasizing that, “The forensic tools used today lack robustness and scalability, and address only some aspects of media authentication; an end-to-end platform to perform a complete and automated forensic analysis does not exist.”

Additionally, Chesney and Citron wrote that Congress could intervene “with regulatory legislation compelling the use of such technology, but that approach would entail a degree of market intervention unlike anything seen previously with respect to these platforms and devices. This option would also run the risk of stifling innovation due to the need to pick winners even while technologies and standards continue to evolve.”

They stated that, “Legal and regulatory frameworks could play a role in mitigating the problem, but as with most technology-based solutions, they will struggle to have broad effect, especially in the case of international relations. Existing laws already address some of the most malicious fakes; a number of criminal and tort statutes forbid the intentional distribution of false, harmful information. But these laws have limited reach. It is often challenging or impossible to identify the creator of a harmful deep fake, and they could be located outside the United States …”

In their recent letter to the DNI, Schiff, Murphy, and Curbelo requested that he “consult with the heads of the appropriate elements of the Intelligence Community to prepare a report to Congress, including an unclassified version that includes” the following:

• An assessment of how foreign governments, foreign intelligence services or foreign individuals could use deep fake technology to harm United States national security interests;
• A description of any confirmed or suspected use of deep fake technology by foreign governments or foreign individuals aimed at the United States that has already occurred to date;
• An identification of technological countermeasures that have been or could be developed and deployed by the United States Government or by the private sector to deter and detect the use of deep fakes, as well as analysis of the benefits, limitations and drawbacks, including privacy concerns, of such counter-technologies;
• An identification of the elements of the Intelligence Community that have, or should have, lead responsibility for monitoring the development of, use of and response to deep fake technology;
• Recommendations regarding whether the Intelligence Community requires additional legal authorities or financial resources to address the threat posed by deep fake technology;
• Recommendations to Congress regarding other actions we may take to counter the malicious use of deep fake technologies; and
• Any other information you believe appropriate.

As Chesney and Citron alerted, without countermeasures, “Deep fakes are a profoundly serious problem for democratic governments and the world order. The United States should begin taking steps, starting with raising awareness of the problem in technical, governmental, and public circles so that policymakers, the tech industry, academics, and individuals become aware of the destruction, manipulation, and exploitation that deep fake creators could inflict.”
