Nearly half of Twitter accounts discussing coronavirus are likely bots, researchers say


By Kelly Taylor Hayes | May 22, 2020
Misinformation about COVID-19 has spread far and fast online. 
PITTSBURGH - Nearly half of the Twitter accounts sharing information about the novel coronavirus are likely bots, according to researchers at Carnegie Mellon University.
Researchers analyzed more than 200 million tweets discussing coronavirus or COVID-19 sent since January. They found that nearly half came from accounts that behave more like bots than actual humans.
Of the top 50 influential retweeters, 82% were likely bots, the research showed. Out of the top 1,000 retweeters, 62% were likely bots.
More than 100 types of inaccurate COVID-19 stories were identified by researchers, including misinformation about potential cures and conspiracy theories — such as hospitals being filled with mannequins or the coronavirus being linked to 5G towers. Researchers said bots are also dominating conversations about ending stay-at-home orders and "reopening America."
The team said it was too early to point to specific entities that may be behind the bots “attempting to influence online conversation.”
"We do know that it looks like it's a propaganda machine, and it definitely matches the Russian and Chinese playbooks, but it would take a tremendous amount of resources to substantiate that," Kathleen Carley, a professor in the School for Computer Science at Carnegie Mellon, said in a statement.
Carley said she and her colleagues are seeing up to two times as much bot activity as the team had predicted, based on previous natural disasters, crises and elections.
The team uses multiple methods to identify which accounts are real and which are likely bots. An artificial intelligence tool analyzes account information and looks at things such as the number of followers, frequency of tweeting and an account's mentions. 
"Tweeting more frequently than is humanly possible or appearing to be in one country and then another a few hours later is indicative of a bot," Carley said.
"When we see a whole bunch of tweets at the same time or back to back, it's like they're timed. We also look for use of the same exact hashtag, or messaging that appears to be copied and pasted from one bot to the next," Carley added. 
The team at Carnegie Mellon is continuing to monitor tweets and has added posts from Facebook, Reddit and YouTube to its research.
A Twitter blog post this week from Yoel Roth, the head of site integrity, and Nick Pickles, the global public policy strategy and development director, calls the word “bot” a “loaded and often misunderstood term.”
“People often refer to bots when describing everything from automated account activity to individuals who would prefer to be anonymous for personal or safety reasons, or avoid a photo because they’ve got strong privacy concerns,” the post states. “The term is used to mischaracterize accounts with numerical usernames that are auto-generated when your preference is taken, and more worryingly, as a tool by those in positions of political power to tarnish the views of people who may disagree with them or online public opinion that’s not favorable.”
Twitter told NPR it has removed thousands of tweets containing misleading and potentially harmful information about the coronavirus.
In the blog post, Twitter adds that not all forms of automation violate its rules; customer service bots that automatically look up information about orders or travel reservations, for example, are allowed.
The company said it is proactively focusing on “platform manipulation,” which includes the malicious use of automation aimed at undermining and disrupting the public conversation, such as trying to get something to trend.
For someone who is unsure about an account's authenticity, Carnegie Mellon researchers recommend examining it closely for red flags: links with subtle typos, many tweets posted in rapid succession, or a username and profile image that don't seem to match up. Any of these may indicate a bot.
"Even if someone appears to be from your community, if you don't know them personally, take a closer look, and always go to authoritative or trusted sources for information," Carley said. "Just be very vigilant."
This story was reported from Cincinnati.
