Readers Beware: AI Has Learned to Create Fake News Stories

Researchers warn about the risks of computer-generated articles—and release tools that ferret out fakes

By Asa Fitch | Updated Oct. 13, 2019 10:59 pm ET
Real-sounding but made-up news articles have become much easier to produce thanks to a handful of new tools powered by artificial intelligence—raising concerns about potential misuse of the technology.
What deepfakes did for video—producing clips of famous people appearing to say and do things they never said or did—these tools could do for news, tricking people into thinking the earth is flat, global warming is a hoax or a political candidate committed a crime when he or she didn’t. While false articles are nothing new, these AI tools allow them to be generated in seconds by computer.
As far as experts know, the technology has been implemented only by researchers, and it hasn’t been used maliciously. What’s more, it has limitations that keep the stories from seeming too believable.
But many of the researchers who developed the technology, and people who have studied it, fear that as such tools get more advanced, they could spread misinformation or advance a political agenda. That’s why some are sounding the alarm about the risks of computer-generated articles—and releasing tools that let people ferret out potentially fake stories.
“The danger is when there is already a lot of similar propaganda written by humans from which these neural language models can learn to generate similar articles,” says Yejin Choi, an associate professor at the University of Washington, a researcher at the Allen Institute for Artificial Intelligence and part of a team that developed a fake-news tool. “The quality of such neural fake news can look quite convincing to humans.”
Stop the presses
The first entry in a powerful new generation of synthetic-text tools was unveiled in February, when OpenAI, a San Francisco-based research body backed by prominent tech names like LinkedIn co-founder Reid Hoffman, launched GPT-2. The software produces genuine-sounding news articles, as well as other types of passages, from fiction to conversations, by drawing on its analysis of 40 gigabytes of text across eight million webpages. OpenAI's researchers developed the software because they knew powerful text-generation technology would eventually appear in the wild and wanted to handle its release responsibly.
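To give a sense of how such a system is driven in practice, here is a minimal sketch of prompting a publicly released GPT-2 checkpoint to continue a news-style opening line. It relies on the open-source Hugging Face transformers library, which the article does not mention; the model size, prompt and sampling settings are illustrative assumptions rather than OpenAI's own setup.

```python
# A minimal sketch, not OpenAI's own code: prompt a small public GPT-2
# checkpoint to continue a news-style opening line.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")   # small public release
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Scientists announced today that"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Sample a continuation; top-k/top-p sampling keeps the text fluent but varied.
output = model.generate(
    input_ids,
    max_length=100,
    do_sample=True,
    top_k=50,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Swapping in a larger checkpoint or a more detailed prompt generally yields more fluent continuations, which is the trend the researchers quoted here are worried about.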
The GPT-2 system worked so well that in an August survey of 500 people, a majority found its synthetic articles credible. In one group of participants, 72% found a GPT-2 article credible, compared with 83% who found a genuine article credible.
“Large-scale synthesized disinformation is not only possible but is cheap and credible,” says Sarah Kreps, a professor at Cornell University who co-wrote the research. Its spread across the internet, she says, could open the way for malicious influence campaigns. Even if people don’t believe the fake articles are accurate, she says, the knowledge that such stories are out there could have a damaging effect, eroding people’s trust in the media and government.
Given the potential risks of giving the world full access to GPT-2, OpenAI decided not to release the full model immediately, instead putting out a more limited version that researchers could study and potentially use to develop tools for detecting artificially generated text in the wild.
In the months that followed, other researchers replicated OpenAI’s work. In June, Dr. Choi and her colleagues at the University of Washington and the Allen Institute for Artificial Intelligence posted a tool on the institute’s website called Grover, positioning it as a piece of software that could both generate convincing false news stories and use the same technology to detect others’ artificial news by ferreting out telltale textual patterns.
Then, in August, Israel's AI21 Labs put a language-generation tool called HAIM on its website, asserting that the risks of releasing text-generation tools into the wild were overblown and that automatically generated text had beneficial uses, including simplifying and speeding the writing process.
The human touch
Yoav Shoham, co-founder of AI21, said in an interview that the effectiveness of these text-generation tools as propaganda machines was limited because they can’t incorporate political context well enough to score points with target audiences. Even if an AI can produce a real-looking article, Mr. Shoham said, a machine can’t grasp, say, the dynamics of a feud between two politicians and craft a false story that discredits one of them in a nuanced way.
“They have the appearance of making sense, but they don’t,” Mr. Shoham said.
Plus, very often articles go off on strange tangents for reasons the researchers don’t completely understand—the systems are often black boxes, generating text based on their own analyses of existing documents.
Ultimately, Dr. Choi says, producing effective propaganda requires machines to have a broader understanding of how the world works and a fine-tuned sense of how to target such material, something only a human overseeing the process could bring to the table.
“Fine-grained control of the content is not within the currently available technology,” she says.
While so far it doesn’t appear that any of the technology has been used as propaganda, the threat is real enough that the U.S. Defense Department’s Defense Advanced Research Projects Agency, or Darpa, in late August unveiled a program called Semantic Forensics. The project aims to defend against a wide range of automated disinformation attacks, including text-based ones.
Private groups are also developing systems to detect fake stories. Along with the freely available online tool Grover, researchers at the Massachusetts Institute of Technology and Harvard introduced a text inspector (http://gltr.io/dist/index.html) in March. The software uses techniques similar to Grover's, predicting whether a passage is AI-made by taking a chunk of text and analyzing how likely a language-generation model would be to pick the word that actually appears next.
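The underlying idea can be sketched roughly as follows: score each word of a passage by how highly a language model ranks it among the possible next words, given what came before. Machine-generated text tends to stick to the model's top-ranked words far more consistently than human prose does. The sketch below is a loose illustration of that approach using the Hugging Face transformers library and a public GPT-2 checkpoint, not the inspector's actual code; the function name and the top-10 cutoff are assumptions chosen for demonstration.

```python
# Loose illustration of likelihood-based detection, not the inspector's code:
# count how often each token in a passage falls within a language model's
# top-k predictions. Machine-generated text typically scores higher than
# human-written text.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def top_k_fraction(text, k=10):
    """Fraction of tokens that appear in the model's top-k next-word guesses."""
    ids = tokenizer.encode(text, return_tensors="pt")
    with torch.no_grad():
        logits = model(ids).logits          # shape: (1, seq_len, vocab_size)
    hits = 0
    for pos in range(1, ids.shape[1]):
        # Predictions for the token at `pos` come from the logits at `pos - 1`.
        top_guesses = torch.topk(logits[0, pos - 1], k).indices
        if ids[0, pos] in top_guesses:
            hits += 1
    return hits / (ids.shape[1] - 1)

# A fraction close to 1 suggests the passage may be machine-generated.
print(top_k_fraction("The quick brown fox jumps over the lazy dog."))
```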
But if language-generation models change how they select words and phrases in the future, detection won't necessarily improve at the same rate, says Jack Clark, OpenAI's policy director. Ever more complex language-generation systems are proliferating rapidly, driven by researchers and developers who are training new models on larger pools of data. OpenAI already has a model with more than 1.5 billion parameters that it hasn't yet released to the public.
“Increasingly large language models could feasibly either naturally develop or be trained to better approximate human patterns of writing as they get bigger,” Mr. Clark says.
Mr. Fitch is a Wall Street Journal reporter in San Francisco. He can be reached at asa.fitch@wsj.com.
