Research carried out by cybersecurity firm Duo Security has found further evidence that Twitter bots are being used to inflate the popularity of content, identifying 7,000 potential Twitter amplification bots in just 24 hours.
Amplification bots, which retweet content automatically and en masse, can be purchased to make a user’s tweets seem more popular and lend them a perception of credibility.
Other times, they can be deployed against an unwitting Twitter user, either to promote an agenda or as a form of harassment.
To get a sense of the scale of this activity, Duo Security data scientist Olabode Anise and principal R&D engineer Jordan Wright created a bot crawler script that trawls Twitter and detects amplification bots.
In just one day of running the script, the pair found over 7,000 potential amplification bots. It’s a feat that’s particularly striking given that Twitter’s API allows the script to check only 100 users per request and permits just 75 requests in a 15-minute window.
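Those two constraints shape how any such crawler has to be written: user IDs must be batched 100 at a time, and the crawler must pause once the request budget for a window is spent. A minimal sketch of that pacing logic is below; `lookup` is a placeholder for whatever client call fetches a batch of users, not a real Twitter SDK function.

```python
import time
from typing import Callable, List

LOOKUP_BATCH_SIZE = 100   # users checked per request (per the article)
WINDOW_REQUESTS = 75      # requests permitted per rate-limit window
WINDOW_SECONDS = 15 * 60  # length of the 15-minute window

def chunk(ids: List[str], size: int) -> List[List[str]]:
    """Split user IDs into API-sized batches."""
    return [ids[i:i + size] for i in range(0, len(ids), size)]

def crawl(user_ids: List[str],
          lookup: Callable[[List[str]], list],
          sleep: Callable[[float], None] = time.sleep) -> list:
    """Check users in batches, pausing whenever the window is spent.

    `lookup` stands in for the actual API client call and is an
    assumption for illustration, not Duo Security's implementation.
    """
    results = []
    for n, batch in enumerate(chunk(user_ids, LOOKUP_BATCH_SIZE), start=1):
        results.extend(lookup(batch))
        if n % WINDOW_REQUESTS == 0:
            sleep(WINDOW_SECONDS)  # wait out the rest of the window
    return results
```

At these limits, a full window can cover at most 7,500 users, which makes the 7,000 bots found in a single day all the more notable.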
“Amplification bots operate as a group,” said Anise and Wright. “In a typical scenario, a user looking to manipulate the platform will pay to have a certain tweet retweeted a set number of times.
“This means that the bot operator will allocate a number of bots to go and retweet the tweet.”
Duo Security’s bot crawler enabled them to map out a network of bots operating together in a coordinated way.
In the researchers’ time-lapse visualisation, the green dots represent amplified tweets, while the black dots represent the potential amplification bots.
“Our crawler helps tell the full story, letting us start with a small number of identified bots retweeting a certain tweet, and mapping out a much larger botnet of bots operating together in a coordinated way,” said the Duo Security team.
“This lets us tackle the problem holistically, as opposed to going after a small number of bots at a time.”
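The expansion the team describes is essentially a breadth-first traversal: start from a few known bots, find the tweets they amplified, then find the other accounts retweeting those same tweets. A sketch under that assumption follows; `retweets_of` and `retweeters_of` are hypothetical lookup functions, not real API endpoints.

```python
from collections import deque

def map_botnet(seed_bots, retweets_of, retweeters_of, max_accounts=10_000):
    """Breadth-first expansion from a few identified bots to a larger
    coordinated botnet: bots -> tweets they amplified -> co-retweeters.

    `retweets_of(bot)` and `retweeters_of(tweet_id)` are stand-ins for
    API lookups; they are assumptions made for this illustration.
    """
    seen_bots, seen_tweets = set(seed_bots), set()
    queue = deque(seed_bots)
    while queue and len(seen_bots) < max_accounts:
        bot = queue.popleft()
        for tweet_id in retweets_of(bot):          # tweets this bot amplified
            if tweet_id in seen_tweets:
                continue
            seen_tweets.add(tweet_id)
            for account in retweeters_of(tweet_id):  # accounts retweeting it
                if account not in seen_bots:
                    seen_bots.add(account)
                    queue.append(account)
    return seen_bots, seen_tweets
```

Because each newly discovered bot points to more amplified tweets, a handful of seeds can map out a much larger coordinated network, which is the holistic view the team describes.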
How to tell an amplification bot from a real user
To create the bot crawler, the team had to first establish what amounts to “normal behaviour” for a human user. They reasoned that typical behaviour is for a tweet to receive more likes than retweets. To back this theory up, they analysed a dataset of 576 million tweets and found that 80% of tweets with over 50 retweets had more likes than retweets.
To further narrow their criteria, they established that accounts made up predominantly of retweets also indicate a strong likelihood of being a bot.
Another giveaway for an amplification bot is a timeline that is not in chronological order.
“To be effective in spreading content through retweets, bots have to exhibit non-normal behaviour such as spiking the ratio of retweets to likes for a given tweet,” the pair said. “The ratio we used as a threshold was seldom seen in legitimate settings.”
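Those three signals — a retweet-dominated timeline, a retweets-to-likes ratio inverted from the normal pattern, and a non-chronological timeline — can be combined into a simple classifier. The sketch below uses illustrative thresholds and field names that are assumptions; Duo Security’s actual criteria are more involved.

```python
def looks_like_amplification_bot(tweets) -> bool:
    """Flag an account exhibiting the three signals described above.

    `tweets` is a list of dicts with illustrative keys
    ('is_retweet', 'retweets', 'likes', 'timestamp'); the 0.9 and
    50-retweet thresholds are assumptions, not Duo's exact values.
    """
    if not tweets:
        return False
    # Signal 1: timeline made up predominantly of retweets.
    retweet_share = sum(t["is_retweet"] for t in tweets) / len(tweets)
    # Signal 2: well-retweeted tweets with more retweets than likes --
    # the inverse of the ~80% pattern seen in normal tweets.
    popular = [t for t in tweets if t["retweets"] > 50]
    skewed = bool(popular) and all(t["retweets"] > t["likes"] for t in popular)
    # Signal 3: timeline not in reverse-chronological order.
    stamps = [t["timestamp"] for t in tweets]
    out_of_order = stamps != sorted(stamps, reverse=True)
    return retweet_share > 0.9 and (skewed or out_of_order)
```

An account of ordinary tweets with more likes than retweets, posted in order, passes cleanly; an account that does nothing but retweet like-starved content gets flagged.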
Amplification bots can be used for political means
This latest study focuses on one particular type of bot: those that amplify content, rather than those that inflate follower counts.
While the study did not analyse the types of tweets being amplified or investigate specific individuals, the researchers did find users with “varying degrees of popularity having their content amplified”, adding that “it’s impossible to know if the user paid for this to happen”.
That’s because sometimes bots amplify the content of random people to “blend in”. Other times it can be an “attempt at harassment”.
While Duo Security did not look into specific individuals being amplified, previous investigations have found “significant evidence” that Russian bots interfered in the 2016 Presidential election.
Further studies have corroborated this and even suggested that these bots had twice as much influence on public opinion as human Twitter users.
How can Twitter solve the bot problem?
Twitter regularly suspends and removes bots, recently ridding the platform of thousands of bot accounts that discouraged Democrats from voting in the US midterm elections.
However, the Duo Security team said that Twitter could go further, highlighting closer collaboration between Twitter and academic and security researchers as an effective way to remove more bots.
They also said that loosening Twitter’s API limits would assist researchers studying Twitter amplification bots and help tackle the problem.
“For example, we studied amplification bots that retweet content because it’s not currently possible to see who liked a particular tweet using the API,” they said.
“In addition to this, we adapted our crawler to work with only the latest 200 accounts that retweeted a tweet, since that’s all that’s available through the API (even if thousands of bots amplified the tweet).
“Finally, the rate limits in place on API endpoints that return who retweeted a particular tweet heavily reduce the reach our crawler can have.”
If the Twitter bot problem persists, Anise and Wright warn that it threatens to undermine the ability to have “honest, authentic conversations”.
“Amplification bots manipulate these conversations, degrading the trust users have that the content they’re engaging with is legitimately popular or credible.
“No one wants to feel like they’ve been tricked.”