There is now compelling evidence for Russian interference in the 2016 Presidential election, from the spread of propaganda and disinformation to hacking. A study published today outlines the role played by Twitter bots, giving greater insight into just how influential these tools were.
Although the exact influence of bots in the 2016 election outcome is not yet clear, numerous studies have identified Russian bots engaging in ‘information warfare’, and potentially affecting the outcome of recent political events.
Despite Twitter deleting around 6% of accounts in a recent cull of “suspicious accounts”, the platform remains host to networks of bots: automated Twitter accounts that imitate human users, often spreading false information in support of a particular agenda.
Russian bots in the 2016 election
A study by Indiana University researchers sheds light on the problem, further confirming that bots played a “disproportionate role” in spreading misinformation before, during and after the 2016 Presidential election.
Published today in Nature Communications, the study analysed 14 million messages and 400,000 articles shared on Twitter between May 2016 and March 2017, a period running from the final months of the 2016 Presidential primaries to beyond the Presidential inauguration on January 20 2017, to measure the impact of Twitter bots in the 2016 election.
It found that a small number of accounts wielded a disproportionate level of influence: the 6% of Twitter accounts that the study identified as bots were enough to spread 31% of the “low-credibility” information on Twitter.
Information was labelled as low-credibility in the study based on whether it was from an outlet that regularly shared false or misleading information, according to lists produced by independent third-party organisations. These sources include outlets with both right and left-leaning points of view.
How Twitter bots spread misinformation
The study also identified tactics used by bots to shape public opinion more effectively. Techniques such as amplifying a single tweet across hundreds of automated retweets, repeating links in recurring posts, and targeting highly influential accounts are used to spread a message and make it appear to be from a legitimate, human user.
These techniques aid the spread of false information, as the continuous sharing boosts a post’s visibility so it is more likely to be shared broadly. This means that although bots may represent a small percentage of accounts, their potential to be influential is wide-reaching.
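The self-reinforcing effect described above can be illustrated with a toy cascade model (a hypothetical sketch, not the study's actual methodology): each retweet slightly raises a post's visibility, and therefore the chance that the next human who encounters it shares it too. Seeding the post with a block of automated retweets shifts the whole cascade upward.

```python
import random

def simulate_spread(num_humans, num_bots, base_share_prob, boost_per_retweet):
    """Toy cascade model: every retweet raises a post's visibility,
    which raises the chance that the next human who sees it shares it."""
    retweets = num_bots  # a bot network amplifies the post immediately
    for _ in range(num_humans):
        # perceived popularity nudges the share probability upward
        share_prob = min(1.0, base_share_prob + boost_per_retweet * retweets)
        if random.random() < share_prob:
            retweets += 1
    return retweets

random.seed(42)
organic = simulate_spread(1000, 0, 0.02, 0.001)
random.seed(42)
boosted = simulate_spread(1000, 60, 0.02, 0.001)  # a 6%-sized bot network seeds 60 retweets
print(organic, boosted)
```

All the parameters here (share probabilities, boost size, audience of 1,000) are arbitrary illustrative values; the point is only that the bot-seeded run always ends with a larger cascade than the organic one, because early amplification raises the share probability at every subsequent step.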
Co-author Giovanni Luca Ciampaglia, an assistant research scientist with the IU Network Science Institute, explained why this made bots in the 2016 election particularly dangerous:
“People tend to put greater trust in messages that appear to originate from many people. Bots prey upon this trust by making messages seem so popular that real people are tricked into spreading their messages for them.”
The researchers also demonstrated how the volume of fake news spread on Twitter is influenced by the number of bot accounts. They ran an experiment inside a simulated version of Twitter and found that deleting 10% of the accounts in the system, selected by their likelihood of being bots, produced a significant drop in the number of stories from low-credibility sources circulating on the network.
This indicates that social networks should be doing more to monitor the number of bot accounts operating on their site.
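The logic of that experiment can be sketched in a few lines. This is a minimal stand-in for the researchers' simulation, assuming a made-up distribution of bot-likelihood scores and an arbitrary rule that bot-like accounts post low-credibility links far more often:

```python
import random

random.seed(0)

def low_cred_volume(accounts):
    """Count low-credibility shares produced in one round of posting,
    given each account's bot-likelihood score in [0, 1]."""
    shares = 0
    for bot_score in accounts:
        # assumption: bot-like accounts push low-credibility links far more often
        if random.random() < 0.01 + 0.5 * bot_score:
            shares += 1
    return shares

# hypothetical platform: mostly human-like accounts, plus a bot-like minority
accounts = [random.betavariate(1, 8) for _ in range(10_000)]

before = low_cred_volume(accounts)

# remove the 10% of accounts with the highest bot-likelihood scores
survivors = sorted(accounts)[:9_000]
after = low_cred_volume(survivors)

print(before, after)
```

Because the removed 10% are precisely the accounts most likely to push low-credibility links, the volume of such content falls disproportionately, which mirrors the study's finding that culling likely bots cuts misinformation by more than the share of accounts removed.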
Stopping the bot invasion
Moving forward, the study suggested steps companies could take to limit the spread of misinformation on their networks. Bots continue to be a problem several years after the 2016 election, with a recent study by Recorded Future highlighting that bot activity significantly increased in the run-up to the US mid-terms earlier this month.
Techniques suggested by the study included improving algorithms to automatically detect bots, and requiring a “human in the loop” to reduce automated messages in the system using tools such as CAPTCHA.
Although their analysis focused on Twitter, the study’s authors added that other social networks are also vulnerable to manipulation.
The research group has also recently launched a tool to measure “Bot Electioneering Volume”. Created by Indiana University PhD students, the programme measures and displays the level of bot activity around specific election-related conversations, usernames and hashtags.
Filippo Menczer, a professor in the Indiana University School of Informatics, Computing and Engineering and the study's lead author, believes that efforts to reduce the number of bots are needed to help solve the problem:
“As people across the globe increasingly turn to social networks as their primary source of news and information, the fight against misinformation requires a grounded assessment of the relative impact of the different ways in which it spreads. This work confirms that bots play a role in the problem, and suggests their reduction might improve the situation.”