Can AI Detect Disinformation? A New Special Operations Program May Find Out

Air Force, U.S. Special Operations Command fund year-long effort to train a neural net to rank credibility and sort news from misinformation.

BY PATRICK TUCKER TECHNOLOGY EDITOR OCTOBER 2, 2020

For all the U.S. military’s technical advantages over adversaries, it still struggles to counter disinformation. A new software tool to be developed for the U.S. Air Force and Special Operations Command, or SOCOM, may help change that.

“If you don’t compete in the information space, regardless of how good your operations are, your activities are, you will probably eat a shit sandwich of disinformation or false reporting later on,” Raymond “Tony” Thomas, a former SOCOM chief, said in an interview. “We certainly experienced that at the tactical level. That was the epiphany where we would have good raids, good strikes, etc. and the bad guys would spin it so fast that we would be eating collateral damage claims, etc. So the information space in that very tactical space is key.”

It even “stretches to the strategic space,” said Thomas, meaning that disinformation can spread until it affects larger geopolitical realities.

Thomas now serves as an advisory board member for Primer, a company that on Thursday announced a Small Business Innovation Research contract to develop software over the next year to help analysts better—and much more quickly—survey the information landscape and hopefully detect false narratives that show up in the public space.

Primer’s neural network technology can scan large amounts of text and extract themes and other information based on the frequency and prominence of words and phrases. It’s the sort of thing that can be very useful if you have a lot of text you want to summarize very quickly in an accurate headline, a capability the company has demonstrated publicly. To train their headline-writing neural net, they used a corpus “of millions of publicly available document-title pairs: news articles and headlines,” according to their paper on the subject.
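
The intuition behind that kind of model, that a theme is whatever appears frequently and prominently, can be sketched in a few lines of Python. What follows is a toy stand-in, not Primer’s actual neural network; the stopword list, the position-based weighting, and the sample articles are all invented for illustration.

```python
# Toy sketch of frequency-plus-prominence theme extraction. A real system
# like Primer's uses trained neural networks; this only illustrates the idea.
from collections import Counter
import re

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "on",
             "that", "is", "it", "for", "by", "with"}

def extract_themes(documents, top_k=5):
    """Rank candidate themes by how often and how prominently they appear."""
    scores = Counter()
    for doc in documents:
        words = [w for w in re.findall(r"[a-z']+", doc.lower())
                 if w not in STOPWORDS]
        for position, word in enumerate(words):
            # Crude "prominence" weight: words near the start of a document
            # (headline, lede) count more than words buried at the end.
            scores[word] += 1.0 / (1.0 + position / 50.0)
    return [word for word, _ in scores.most_common(top_k)]

articles = [
    "Strike reported near the border; officials dispute casualty figures.",
    "Officials in both capitals traded claims about the border strike.",
]
print(extract_themes(articles))  # e.g. ['officials', 'border', 'strike', ...]
```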

The new contract will help Primer to build a platform “to automatically identify and assess suspected disinformation,” according to a press release from the company. “Primer will also enhance its natural language processing platform to automatically analyze tactical events to provide commanders with unprecedented insight as events unfold in near real-time.”

It will be a slow process. There’s a big difference between teaching a neural net to summarize a news article or paper and write a headline and teaching it to separate fact from fiction. How do you train software to distinguish trustworthy information from untrustworthy claims? Primer plans to do it much the way one might teach a child: by teaching it to recognize credible sources versus less credible ones. That takes time, practice, consistent scrutiny, and a fair amount of data input from users and operators, said John Bohannon, the company’s director of science.
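
Conceptually, that feedback might accumulate the way a running track record does. The sketch below is one assumption about how operator labels could be rolled up into a per-source credibility estimate; the class, the smoothing choice, and the source names are hypothetical, not Primer’s design.

```python
# Minimal sketch of learning source credibility from operator-labeled claims.
# An assumption about how such training could look, not Primer's method.
from collections import defaultdict

class CredibilityModel:
    """Estimate per-source credibility from operator-labeled claims."""

    def __init__(self):
        self.confirmed = defaultdict(int)
        self.refuted = defaultdict(int)

    def observe(self, source, was_accurate):
        # Each labeled claim nudges the source's track record.
        if was_accurate:
            self.confirmed[source] += 1
        else:
            self.refuted[source] += 1

    def credibility(self, source):
        # Laplace smoothing: unknown sources start at 0.5, and a single
        # data point cannot swing the estimate all the way to 0 or 1.
        good, bad = self.confirmed[source], self.refuted[source]
        return (good + 1) / (good + bad + 2)

model = CredibilityModel()
model.observe("state_media_x", was_accurate=False)
model.observe("wire_service_y", was_accurate=True)
print(model.credibility("state_media_x"))  # drops below 0.5
print(model.credibility("wire_service_y"))  # rises above 0.5
```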

Bohannon showed us an example of where the technology is today, in the context of the emerging conflict between Armenia and Azerbaijan. The network can find news stories, sources, and social media posts about the conflict and segment that information into groups based on who is saying what about a particular event or incident, such as a military strike. This immediately gives the user a sense of what different groups and different governments are claiming. You can also see how those reporting entities have changed the way they’ve discussed the situation in question over time. Essentially, at present, the network gives you much of the same information that you might get from a newspaper story covering an incident or event.
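
To make the segmentation step concrete: one crude way to group reports by narrative is to cluster items whose wording overlaps, then list which sources back each version of events. The token-overlap approach below is a stand-in for whatever richer representation a production system would use; every name and quote in it is invented.

```python
# Illustrative sketch of grouping reports by narrative. A crude token-overlap
# stand-in for production clustering; names and data are hypothetical.
import re

def tokens(text):
    return set(re.findall(r"[a-z']+", text.lower()))

def group_by_narrative(reports, threshold=0.3):
    """Greedily cluster (source, claim) pairs whose wording overlaps."""
    clusters = []  # each cluster: {"tokens": set, "sources": [...]}
    for source, claim in reports:
        toks = tokens(claim)
        for cluster in clusters:
            # Jaccard similarity between this claim and the cluster so far.
            overlap = len(toks & cluster["tokens"]) / len(toks | cluster["tokens"])
            if overlap >= threshold:
                cluster["tokens"] |= toks
                cluster["sources"].append(source)
                break
        else:
            clusters.append({"tokens": set(toks), "sources": [source]})
    return clusters

reports = [
    ("gov_a", "Our forces destroyed an enemy artillery position overnight."),
    ("gov_b", "Civilian buildings were struck overnight; no artillery was present."),
    ("outlet_c", "Strike destroyed artillery position, officials say overnight."),
]
for cluster in group_by_narrative(reports):
    print(cluster["sources"])  # ['gov_a', 'outlet_c'] then ['gov_b']
```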

The hope over the next 12 months is to add data that comes from operators responding and interacting with the product and the information it presents. Those users in SOCOM and the Air Force will be able to determine—and provide information on—which of the sources is the most credible, based on what they’ve seen. Their input will allow the network, over time, to develop a sense of which claims are more likely to be factual, based on the source and on how other sources’ accounts differ. “The next level of this system is one that’s… more predictive, allows you to see and make inferences that you can test along the way,” Bohannon said.
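
That predict-then-test loop can itself be sketched: use a source’s current credibility as the prediction for its next claim, then score the predictions once operators render verdicts. This is an assumption about the workflow, not Primer’s design; the Brier score used here is simply a standard way to grade probabilistic predictions.

```python
# Sketch of the "testable inference" loop: predict whether a claim will hold
# up from the source's track record, then grade the prediction once operators
# weigh in. Hypothetical names and data throughout.
predictions = []  # (claim_id, predicted_probability)
verdicts = {}     # claim_id -> True/False, filled in later by operators

def predict(claim_id, source, credibility):
    p = credibility.get(source, 0.5)  # prior: the source's track record
    predictions.append((claim_id, p))
    return p

def evaluate():
    """Brier score over resolved claims: lower means better-calibrated."""
    resolved = [(p, verdicts[cid]) for cid, p in predictions if cid in verdicts]
    if not resolved:
        return None
    return sum((p - float(actual)) ** 2 for p, actual in resolved) / len(resolved)

credibility = {"state_media_x": 0.2, "wire_service_y": 0.9}
predict("claim-1", "state_media_x", credibility)
predict("claim-2", "wire_service_y", credibility)
verdicts.update({"claim-1": False, "claim-2": True})
print(evaluate())  # 0.025: both predictions landed on the right side
```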

Eventually, the platform should be able to assign a particular claim or news item a sort of accuracy score based on those factors: whether the source is credible, what other, perhaps more credible, sources are saying, and so on. And if you’re not sure how the network reached its conclusion, you can see the process, and the news sources, it used to make that determination. That’s the ambition, anyway.
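
One plausible shape for such a score, under the assumption that its two ingredients are the originating source’s credibility and corroboration from other sources, is sketched below, with the inputs retained so a human can audit the result. The weighting and the names are illustrative only, not Primer’s.

```python
# Hedged sketch of a claim-scoring step: blend the originating source's
# credibility with corroboration from other sources, and keep the evidence
# so the score can be audited. All weights and names are assumptions.
def score_claim(claim_source, corroborating_sources, credibility, weight=0.6):
    """Return (score, provenance) for a claim.

    credibility: dict mapping source -> credibility in [0, 1].
    """
    base = credibility.get(claim_source, 0.5)
    if corroborating_sources:
        corroboration = sum(credibility.get(s, 0.5)
                            for s in corroborating_sources) / len(corroborating_sources)
    else:
        corroboration = 0.5  # no independent reporting either way
    score = weight * base + (1 - weight) * corroboration
    # Provenance: every input that produced the score, for human review.
    provenance = {
        "source": (claim_source, base),
        "corroborators": {s: credibility.get(s, 0.5)
                          for s in corroborating_sources},
    }
    return score, provenance

cred = {"state_media_x": 0.2, "wire_service_y": 0.9, "local_blog_z": 0.5}
score, why = score_claim("state_media_x", ["wire_service_y"], cred)
print(round(score, 2), why)  # 0.48, plus the evidence behind it
```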

It’s a first step toward tackling a larger problem, one that will only grow. Sean Gourley, Primer’s founder and CEO, described how the cost of using disinformation is relatively low for authoritarian regimes that aren’t accountable to voters or democratic allies when they lie. Couple that with how quick and easy it is to produce and spread disinformation online, and you run into a big problem for which the U.S. and other militaries from democratic countries have little defense. “Information attacks are cheaper to carry out than identify. There’s an asymmetry here. It’s much like an IED explosion. You can put it down very cheaply, but the cost of defending against it is very high,” he said. “This is not something that’s going to be solved by humans… You’re bringing a knife to a gunfight if you’re going to bring humans to this problem.”

New tactics for recognizing disinformation are only part of the solution. Military watchers have long complained that the United States is too slow and cautious in responding to information warfare attacks, a point that Thomas reiterated.

“It’s no stretch to say it was easier to drop a Hellfire on someone or do something kinetic than it was to do something offensively in the information domain. It’s antithetical to us… but if your adversaries are playing in that space, you have to respond,” he said.

https://www.defenseone.com/technology/2020/10/can-ai-detect-disinformation-new-special-operations-program-may-find-out/168972/
