Instagram will begin testing features that blur messages containing nudity to safeguard teens, Meta announced on Thursday (12 April).

The move comes as the social media company looks to quell concerns over harmful content on its apps. 

Meta is facing pressure in the EU and US over allegations that its apps are harmful and addictive to young people. 

The new feature will be implemented in Instagram’s direct messaging service and will use on-device machine learning to determine whether an image contains nudity.

Meta confirmed that the feature will be automatically turned on for users under 18. 

The protection feature will work in end-to-end encrypted chats, where Meta has no access to the images, because the analysis runs on the device itself, Meta said. 
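Meta has not disclosed how its system is built, but the mechanism described above can be illustrated with a minimal sketch: an image is scored by a classifier running locally, and if the score crosses a threshold the message is flagged for blurring. The `nudity_score` function, the threshold value and the data structures here are all hypothetical stand-ins, not Meta's implementation.

```python
from dataclasses import dataclass

def nudity_score(image_bytes: bytes) -> float:
    """Hypothetical on-device classifier. A real implementation would run a
    trained ML model locally; this stub always returns 0.0."""
    return 0.0

@dataclass
class IncomingImage:
    data: bytes
    blurred: bool = False

def process_incoming(image: IncomingImage, threshold: float = 0.8) -> IncomingImage:
    # Classification happens entirely on-device, so the decrypted image never
    # leaves the phone -- consistent with end-to-end encryption.
    if nudity_score(image.data) >= threshold:
        image.blurred = True  # the UI would then render a blurred preview
    return image
```

Because the decision is made before anything is sent or displayed, no server-side scanning is required, which is why the feature can operate inside encrypted chats.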


Meta also announced it was developing technology to help identify accounts that may be engaging in sextortion scams. The tech giant said it was testing a feature that will alert users who may have interacted with a suspicious account. 

In 2023, more than 30 US states filed a lawsuit against Meta, alleging the harmful and addictive design of its social media platforms, Facebook and Instagram. 

The lawsuit alleged that Meta deliberately deployed features on its social media platforms that it knew would be “psychologically and physically harmful” to its younger users.

These include features that would promote “compulsive” and “prolonged” daily use, as well as “infinite scroll” features that autoplay content. 

The states also complained about the disruptive notifications of Meta’s social media platforms, which they claim cause younger users and children to “over-engage” with Facebook and Instagram. 

Alongside design and notifications, the lawsuit further claims that Meta collected and used personal data to find ways of increasing user engagement.