Twitter is introducing a “strike system” as part of its latest effort to control the spread of misleading content.

Announced in a blogpost on Monday, Twitter said it will begin labelling tweets containing misinformation related to Covid-19 vaccines as part of its strategy to “remove the most harmful Covid-19 misleading information from the service”.

A strike system will apply to accounts that repeatedly violate its Covid-19 misleading information policy. Accounts that have had a tweet labelled as harmful will accrue one strike, while accounts that have had a tweet deleted will accrue two.

Two strikes will result in a user’s account being locked for 12 hours, with accounts locked for 24 hours in the event of a third offence. A seven-day account lock is the penalty for four strikes, followed by a permanent ban for five or more.

Twitter began labelling potentially harmful or misleading information related to Covid-19 in May last year, and ramped up its efforts in December by expanding its policy to include debunked claims about vaccine side-effects, false claims that Covid-19 does not exist, and statements about vaccines that invoke a deliberate conspiracy. According to the company, it has removed over 8,400 tweets since introducing its Covid-19 guidance.

The task of labelling tweets that violate its Covid-19 misleading information policy will be carried out by Twitter team members and will only cover English-language content at first. These decisions will then be used to train automated tools to identify and label similar content. Twitter said its goal is to eventually use a combination of both humans and automation to identify Covid-19 vaccine misinformation.

The social media giant also recently launched its Birdwatch pilot, which takes a community-driven approach to misinformation: it allows pilot participants to flag tweets they view as misleading and to provide additional context explaining their decision on a separate Birdwatch site.

The general consensus is that these efforts by social media giants to tackle Covid-19 misinformation are a step in the right direction. But with 500 million tweets sent worldwide each day, stemming the flow of conspiracy theories and other harmful content is no easy task, raising the question of whether platforms could be doing more.

Facebook has already taken a similar stance on vaccine misinformation and began removing posts containing false information related to the Covid-19 vaccine in February. It has also introduced a similar strike system for users who repeatedly violate its policies.

Andy Patel, researcher at F-Secure, commented that the time it takes to label content may leave enough of a window for it to spread: “While applying labels to misleading content does help inform the public of disinformation, it takes time to track down, fact-check, and flag those pieces of content. Since disinformation tends to spread virally, plenty of users will see and share a piece of content before it has been appropriately labelled. This proposed mechanism also doesn’t take into account the fact that many users of the platform intentionally share disinformation as soon as it is published in order to maximise exposure.”

Patel also believes that further clarification is needed on the strike system.

“On the subject of the proposed ‘strike’ system, Twitter states ‘Persistent spreads of fake Covid-19 vaccine content will receive a “strike”,’” he said. “They don’t define what ‘persistent’ – how many times a user publishes a tweet of this nature – means. They also don’t state whether accounts that retweet such content will receive similar lockouts. The account lockouts they’ve mentioned will do nothing to stop the people behind coordinated disinformation networks who are used to creating new accounts in response to lockout or suspension actions. In my opinion, the measures Twitter has proposed will do little to stop the spread of any type of disinformation.”


Read More: Twitter and Facebook flag Trump’s baseless election claims.