December 15, 2021

Instagram CEO’s testimony will trigger a crackdown on Big Tech’s use of algorithms

By GlobalData Thematic Research

Instagram CEO Adam Mosseri’s testimony before the US Congress has highlighted Big Tech’s unregulated use of algorithms, reigniting the debate over algorithmic accountability. Mosseri aimed to defend the platform’s record on user safety. Until now, the lack of social media regulation has drawn comparisons to the Wild West. Public trust in Big Tech is at an all-time low, and regulating its rampant use of algorithms is uncharted territory.

How important is user safety to Big Tech?

Instagram made the timely announcement of a set of parental controls to improve teen safety on its app. These include time limits that parents can set for their children, as well as notifications for parents if their child reports someone. In addition, the app will now restrict mentioning or tagging teenage users. Despite Mosseri’s voluntary testimony, negative sentiment towards parent company Meta (formerly Facebook) persists, as it has consistently downplayed its potentially harmful impact on users. Meta cannot guarantee user safety until it publishes the algorithms it uses to drive engagement, especially as other platforms such as TikTok have already done so. Meta and its Instagram platform must prioritize transparency to prove that they are safe.

Self-regulation is no longer enough

Mosseri argued that the best way to protect teenagers from the impacts of social media is an industry-specific body that would set user safety standards for Big Tech. This body would receive input from parents and regulators on how to verify users’ ages and create parental controls. Although Mosseri stopped short of suggesting that Meta regulate itself, his proposal amounts to coordinated self-regulation by the broader tech industry. Yet a Meta whistleblower revealed that Instagram knew it was worsening the mental health of its teenage female users. With the Big Tech industry often appearing to put profits above user safety, any move towards self-regulation alone is unlikely to reassure users or regulators.

Greater regulation is coming

Meta will face greater scrutiny over its impact on users as regulators insist on greater transparency. However, this is not unique to Meta. International regulators will attempt to crack down on the influence of Big Tech by requiring the formal regulation of underlying algorithms, with Europe and China leading the way.

Big Tech companies have been given free rein for too long. Their understanding of how their proprietary algorithms work has allowed them to stay one step ahead of regulators, but regulators are now catching up. Proposed legislation from China and the European Commission demonstrates that regulators are keen to control the impact of Big Tech. Soon, its algorithms will have to become more transparent.

The UK is following suit

A further example of more proactive regulation is the UK’s proposed landmark Online Safety Bill, which aims to regulate social media through new offences and fines. The bill will set a series of standards to protect users from harm, holding Big Tech accountable not just for removing illegal content but also for the impact of its algorithms. It also requires that companies appoint a named individual employee who would be personally liable for any such violations. This ensures that Big Tech companies cannot separate themselves from their own algorithms: both the company and named individuals are held directly accountable for them.

As the harmful impacts of social media become increasingly apparent, Big Tech will become more accountable. Self-regulation of the tech giants will no longer suffice.