The Next Big Internet Threat

You think election interference is as bad as it can get? Something even worse is just around the corner.


People are worried about what’s happening on the internet—and they should be. In just the past few months, Americans have seen massive social turmoil over the deletion of conspiracy theorist Alex Jones’ accounts from digital platforms; the public disclosure of criminal charges for Russian interference in the upcoming midterm elections; and continuing developments in disinformation operations, such as the Kremlin’s new digital campaign to turn U.S. public opinion against taking military action against Bashar Assad’s regime in Syria.
But these latest incidents are only part of a far bigger trend. In the digital age, it has become far too easy for bad actors to spread harmful content far and wide with just the click of a button.
All told, the internet age has seen four major waves of digital threats. None of these challenges has been entirely resolved, and the more recent of them remain serious threats, not just to the integrity of online dialogue but to American security and democracy. But the fifth wave is now fast upon us—and it might prove the thorniest of all.
The first wave was the exploitation of children, which should’ve been an omen of how dark the many corners of the internet could get. Even as dial-up modems delivered bytes at maddeningly slow speeds, internet users swiftly took advantage of their newfound connectivity, freedom and anonymity to obtain and share child pornography—a serious problem and, moreover, a crime. In 2009, the campaign to stop the spread of child pornography had a major breakthrough with the Microsoft- and Dartmouth-developed PhotoDNA, which created a database of digital signatures to help companies remove previously identified exploitative imagery from their platforms and prevent it from being uploaded in the future. Though the problem persists, this kind of technology, in conjunction with the strong leadership of the National Center for Missing and Exploited Children, has helped ensure that social media are no longer the haven for exploitative minds they once were.
Next came trolling. In this case, speed of communication, anonymity and, especially, a willingness to say online what one would never say in person yielded savage waves of vitriol. High school bullies picked on loners; misogynists harassed women; racists pursued minorities. Even worse, trolling moved from the dastardly to the deadly as certain victims, faced with these convulsions of hate, committed suicide. Again, technology companies responded, this time by trying to “clean up” their platforms through user complaints and reviewer responses. It’s helped—though trolling continues to plague little-known users as well as Hollywood stars like Leslie Jones, who chose to leave Twitter rather than continue to endure a flood of misogyny and racism. And the most prominent descendant of trolling—online hate speech—has now taken root on leading social media platforms to such a degree that it often takes a massive groundswell of public outcry just to oust a single account spewing racist or other viciously hateful content, all in spite of the tech companies’ unequivocal, if underenforced, anti-hate speech policies.
Third came terrorist recruitment and radicalization. While al-Qaida’s Yemeni affiliate and the Somalia-based al-Shabab had experimented with disseminating magazines and videos through social media, it was ISIS that took it all to another level. As the terrorist group swept through Syria and into Iraq, hashtag campaigns, Facebook pages and YouTube channels bombarded global audiences with graphic videos of beheadings as well as claims of brotherly affection, all yielding tens of thousands of foreign fighters who swelled the group’s ranks—plus others who killed their fellow citizens at home. After sustained, direct appeals from governments worldwide suffering from ISIS’ violence, tech companies accelerated their efforts to contest ISIS’ virtual safe haven on their platforms and began creating a database of terrorist content roughly akin to the child pornography database still swelling with new images. It’s a step in the right direction—but, here too, the problem is far from solved.
Most recently came foreign election interference. The Kremlin’s social media-fueled campaign to undermine democracy didn’t start with America’s 2016 presidential election—for example, Moscow meddled with the United Kingdom’s earlier Brexit referendum—but the degree and boldness of Moscow’s efforts to sway the election toward Trump, the stakes, and the distinct possibility that Russian activity made the difference all awoke the global public to this fourth wave of content policy challenges. As the evidence of social media’s role in this foreign influence campaign mounted, tech companies gradually moved through the five stages of grief, dwelling especially on denial but, recently, arriving at acceptance. Facebook, for example, has begun suspending accounts for what it calls “inauthentic” activity intended to polarize the electorate. But that still leaves a lot of work to be done, as we’ve discussed at length elsewhere.
So, what will we see next in the social media universe? Thus far, we’ve witnessed four major waves of offensive content that have tracked the darkest tendencies in humanity—content that has exploited people (sex), spread vitriol (hate), encouraged ghastly attacks (violence) and duped electorates (power). Going forward, we fear a new kind of trend will emerge: “reputational exploitation,” feeding off the human tendency to maximize self-interest while paying no heed to the rest of society—namely, by falsely disparaging others for one’s own benefit.
Reputational exploitation would propagate various forms of content—and power the campaigns behind them—in an attempt to destroy, even temporarily, a competitor’s reputation. This could take commercial form. Imagine a situation in which one investor wishes to spread negative information about a specific company so that she can artificially create and seize a forthcoming opportunity to short its stock. Or consider a company that wants to move public opinion against one of its rivals, so that it can attract some of the potential revenues at hand. (We’ve already seen the early manifestations of this through schemes to hack media outlets in efforts to obtain corporate press releases before they’re published.)
This wave can not only throw financial markets into disarray but also upend foreign policy, in something of a flip side to the fourth wave’s targeting of domestic politics. Imagine situations in which one or more countries seek to degrade a regional rival’s reputation. We saw hints of this type of activity last year, when—at least according to Qatar—Qatar’s sudden isolation by its neighbors and the United States was sparked when a hacker broke into a news site and published fake statements, attributed to Qatar’s emir, praising Iran and criticizing the United States.
The types of reputations to be targeted can take other forms, too. Right now, all of us generally trust weather forecasts from reputable sources, especially when backed by the government; and these forecasts, in combination with our anticipated responses to them, inform the practices of utilities supplying all of us with power, water and other essentials. For example, when the Weather Channel reports that a hurricane is approaching, utilities prepare for damage by surging workers to the potentially affected area while also anticipating drains on power and water usage in the areas to which those fleeing the hurricane are likely to relocate. Now imagine manipulating those reputations by spreading false social media commentary on weather patterns, power usage and related trends, perhaps through bots that, en masse, spread word of a purported approaching storm. Such a campaign could disrupt our critical infrastructure by causing populations, and even utilities themselves, to react to nonexistent trends and overuse or oversupply key resources like power and water.
Reputational exploitation will emerge and challenge our democratic institutions and fair markets because the world of content creation and distribution is fast-evolving but minimally monitored or regulated. We are at a stage when artificial intelligence will increasingly be used to create advertisements and content, largely because it will be simpler and cheaper for machines, rather than humans, to manage the entire mechanical value chain behind creating digital advertisements and unpaid content alike. Furthermore, AI will enhance the routing of specific content to the audiences most likely to find that content engaging—even if it’s misleading or outright false. Overall, the mastery and integration of AI throughout the tech sector will not only benefit the powerful players looking to peddle malicious content but also empower the little guy; it will enable smaller entities, including illegitimate and nefarious actors, to enter the social media ecosystem and wreak havoc in their perhaps narrow but still damaging ways. And we can expect that those entities will take advantage of such opportunities. After all, the past four waves have time and again illustrated this dynamic at play.
All of this means that disinformation not only is here to stay but also is getting more dangerous, fast—with the emergence of deep fakes; of contingency-based ad targeting, which shows ads to a user based on her specific history and location; and of other technological advances. Its face will only continue to evolve as more and more nefarious entities get in on the act.

