Comment
March 23, 2022

How AI could de-escalate online conflict

Just as social media is a space for protest and activism, it is also a place for tantrums – so where does AI fit in?

Currently, users can send abusive and inflammatory content and antagonize each other to no end. The rise of internet trolling exemplifies the efforts of a small minority to create conflict and hostility within online communities.

But even those who are not trolls can fall into patterns of negative communication on social media. Can another technology help us to be less harsh online?

AI could be the answer

This is where AI comes in. Many of us are familiar with the smart replies that automatically appear in Outlook, Gmail, and other communication services when writing a message or reply. But AI can go beyond suggesting likely responses and could instead curtail the use of negative ones – acting as a buffer between users who do not see eye to eye.

An example of AI used for this purpose is seen in co-parenting apps. Apps like coParenter, OurFamilyWizard, Amicable, and TalkingParents work as scheduling tools for divorced parents, but also have AI-based features to mediate family interactions. The apps perform sentiment analysis on messages before they are sent and can prompt senders to reconsider any negative language. Like a traditional mediator, the apps’ messaging services encourage parents to treat each other with more civility and, in the long run, aim to make positive language the norm in most interactions. The hope is that this behavior will carry over outside the app. While AI’s affect detection is not perfect and parents can choose to ignore the suggestions, it has proved successful at nudging users towards more positive relationships.
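The "analyze before send" pattern these apps use can be sketched in a few lines. The sketch below is a deliberately simplified illustration, not any app's actual model: real systems use trained affect-detection models, whereas here a hypothetical word list and threshold stand in for the sentiment analyzer.

```python
# Minimal sketch of a pre-send sentiment gate, assuming a toy
# negative-word lexicon in place of a trained affect-detection model.

NEGATIVE_WORDS = {"hate", "stupid", "useless", "liar", "awful"}

def sentiment_score(message: str) -> float:
    """Return the fraction of words flagged as negative (0.0 to 1.0)."""
    words = [w.strip(".,!?").lower() for w in message.split()]
    if not words:
        return 0.0
    flagged = sum(1 for w in words if w in NEGATIVE_WORDS)
    return flagged / len(words)

def gate_message(message: str, threshold: float = 0.2) -> str:
    """Send immediately, or prompt the sender to reconsider first."""
    if sentiment_score(message) >= threshold:
        return "prompt"  # sender may revise, delete, or send anyway
    return "send"
```

In a real deployment the lexicon would be replaced by a model scoring tone and context, and a "prompt" result would surface a dialog rather than block the message – the sender keeps the final say, just as in the apps described above.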

A similar use of AI has been trialed by some social media apps. Twitter widely introduced its prompts feature in 2021 on accounts with English language settings. The feature detected negative language in tweets in much the same way as the co-parenting apps and gave users a moment to reconsider: they could edit the tweet, delete it altogether, or send it unchanged. From beta testing, Twitter found that 34% of users revised their potentially offensive or rude tweets when prompted.

Though the potential is huge, security will be an issue

Among other methods to control online hostility, AI sentiment analysis could be deployed more broadly across social media platforms. Introducing this type of vetting of social media posts will raise concerns about privacy and policing, so social media companies will need to be transparent in their methods and decision-making from the start. Twitter, for example, included a feedback section as part of its prompts feature to increase the accuracy of its affect detection and limit the misclassification of reclaimed language, or the speech of underrepresented communities, as negative.

Twitter is still a long way from being a friendly space, but it is taking steps to trial a method that could shape the future of our online interactions. Social media platforms will continue to struggle with the balance between moderating and over-policing. However, AI does have the potential to nudge users towards more positive interactions over time. Social media is a great tool for community building, and AI could help users and companies see online unrest and outrage as an avoidable byproduct of social media, not its foundation.