Facebook recently unveiled “proactive” scrutiny measures and administrator tools for its closed-community private groups.

It’s about time, according to many digital industry think tanks, which have long considered the laissez-faire privacy afforded to Facebook’s private groups a facilitating factor in the spread of online hate, extremism, conspiracy theories and misinformation.

Dangerous content crackdown

To be clear, content posted within private groups is supposed to be just that: private. Unlike public groups, which are open to all and do not require permission to join, private group content is effectively hidden behind a digital wall and is excluded from search results. A private group cannot be found or joined by all and sundry: to join, Facebook users need some level of approval, membership status or even a recommendation from an existing group member.

It’s for good reason. Facebook created its private groups to allow users with a special but common interest – from victims of a rare disease to sufferers of bullying at school – to discuss their shared issues and exchange support and advice within a closed, protected environment, away from the scrutiny of social trolls and the unhelpful or unsupportive posts of those with opposing political or personal opinions.

This is all well and good, in theory. But those currently scrutinising Facebook’s role in the online spread of hate, extremism, conspiracy theories and disinformation point out that much of this activity occurs within these private community settings.

All this has led Facebook to upgrade its scrutiny measures for private community groups. Facebook’s vice-president of engineering, Tom Alison, published a blog post on Facebook outlining the company’s new ‘Safe Communities Initiative’ – essentially a mix of AI, machine learning and human checkers reviewing and deleting content deemed harmful.

Rule breakers

Private group administrators will get new tools to help keep community content in line with Facebook’s rules – but administrators themselves will also come under scrutiny. Earlier this year, Facebook updated its policy to pay more attention to administrator and moderator behaviour. Admins who repeatedly break the rules, or who invite members with a track record of repeated rule-breaking, may be required to review posts before they are published to the private group, according to Alison. Repeated offences could result in a group being taken down.

The question now is how administrators, and the members of the private groups they administer, will react to these new levels of scrutiny. Some critics of the new Safe Communities Initiative say such measures can only serve as a disincentive to genuine online community support. Others ask: how ‘private’ can a private group be if it is subject to ongoing monitoring? Some observers say Facebook’s actions will only push those who have used the platform to spread hate and disinformation onto other platforms – or worse, underground, making them harder to detect and therefore potentially more dangerous. Other Facebook users worry that groups may be unfairly judged, or even removed, because of an AI error or a human moderator’s error of judgement.

All of which goes to the heart of Facebook’s challenge today: can the social network assert itself as a platform for cutting-edge private community debate that is also checked by Big Brother?

Which leads to another question: Who’s checking the social and ethical mores of Big Brother?