Twitter is experimenting with new misinformation feature

By Ellen Daniel

Twitter is experimenting with a new feature to flag the spread of misleading information.

Last week, NBC News reported that Twitter had developed a demo that uses orange and red labels to indicate whether a tweet from a public figure could be spreading misinformation.

The news outlet shared a screenshot of the new feature, in which users would be alerted about “harmfully misleading” tweets. It reported that tweets would then be corrected by verified fact-checkers.

Twitter confirmed that it was a possible new feature for the platform, but said it did not have a schedule for its rollout.

This comes as the World Health Organisation has likened the spread of online misinformation about the novel coronavirus, such as bogus cures or conspiracy theories about its origin, to an “infodemic”.

Earlier this month, Twitter announced that it was banning users from “deceptively [sharing] synthetic or manipulated media that are likely to cause harm” and that it may label tweets containing synthetic and manipulated media.

Is the Twitter misinformation feature tied to the 2020 election?

Yuval Ben-Itzhak, CEO at Socialbakers, believes that this potential new feature ties in with the 2020 US presidential election:

“As we see the social media platforms each working to truly combat the fake news epidemic, Facebook kicked off the year with an announcement that it was banning ‘deepfakes’ on the platform. This week a leaked demo of a new Twitter feature showed huge red labels beneath tweets that spread misinformation. The timing is no coincidence, with this year’s US election likely to bring fake news back to the top of the news agenda. When – or even if – the feature will fully roll out is unclear, but it’s certainly a clarion call to users and brands that, much like Facebook, Twitter is also committed to working to be a home for fact-checked, policed content that doesn’t mislead.

“As for safety online, particularly that of children, TikTok announced the launch of its ‘Family Safety Mode’ in the UK, to give parents the ability to set limits on their children’s use of the app. Unveiling a new ‘restricted mode’ that filters out inappropriate content and enables parents to shut off features like messaging on their children’s accounts, TikTok has set the tone that the future of safer social media is really putting control into the hands of users, letting them decide what content they want to see online.

“Last week in a letter to the Financial Times, Facebook CEO Mark Zuckerberg pushed for more democratic tech regulation. Governments around the world are wrestling with how best to regulate social media – but it’s the platforms themselves who have come out with bold, meaningful steps to make social media safer, less harmful, and more transparent.”

Read more: Twitter results show efforts to combat hate speech have “started paying dividends”.