The hearing called by the House Committee on Energy and Commerce on Thursday 25 March addressed the CEOs of Facebook, Twitter, and Alphabet on the topics of misinformation and disinformation on the internet. The CEOs' testimonies highlighted the self-regulating content-removal strategies their companies have deployed, but these have not been enough to curb the underlying problem: the algorithms that push misinformation towards susceptible users.
Misinformation was pervasive throughout 2020 and into 2021, impacting issues from elections to the pandemic. The Capitol Hill riot in the US, in which five people died, gathered momentum on social media.
Self-regulation removes harmful content, but it is a blanket response that does not go far enough to protect vulnerable users from targeted misinformation. Social media companies cannot keep on top of every false post, and some platforms have even been found to promote misinformation, undermining their own self-regulation strategies. The US should follow the EU's lead and make algorithmic transparency a cornerstone of future regulation.
CEO testimonies fall short of the standards needed to prevent misinformation reaching vulnerable people
2020 saw unprecedented attempts by Big Tech companies to regulate misinformation themselves. At the Congressional hearing, Facebook CEO Mark Zuckerberg argued that Big Tech should continue to self-regulate in this way, by removing false content. Zuckerberg said: “I believe that Section 230 [section 230 of the Communications Decency Act (CDA)] would benefit from thoughtful changes to make it work better for people […] Instead of being granted immunity, platforms should be required to demonstrate that they have systems in place for identifying unlawful content and removing it.” He was referring to Facebook’s independent third-party fact-checkers, meaning that Facebook would continue to self-regulate using systems put in place by the company itself.
Alphabet CEO Sundar Pichai explained the fact-checking process Google has implemented: “Today, when people search on Google for information for Covid-19 vaccines in the United States, we present them with a list of authorized vaccines in their location, with information on each individual vaccine from the FDA or CDC.” Pichai also disagreed with revoking or changing Section 230, expressing concern that doing so risked “harming both free expression and the ability of platforms to take responsible action to protect users in the face of constantly evolving challenges.” This puts the Alphabet CEO on the side of self-regulation too.
Twitter CEO Jack Dorsey supported algorithmic transparency. He said: “We believe that people should have transparency or meaningful control over the algorithms that affect them. We recognize that we can do more to provide algorithmic transparency, fair machine learning, and controls that empower people. The machine learning teams at Twitter are studying techniques and developing a roadmap to ensure our present and future algorithmic models uphold a high standard when it comes to transparency and fairness.” Twitter has put some initiatives in place, such as Bluesky, an independent initiative that aims to develop open and decentralized standards for social media.
This is a start, but independent algorithmic-transparency standards need to be implemented across all social media platforms. In particular, regulation should cover the content-curation clustering algorithms that group users with similar features, since that grouping is often what causes misinformation to reach vulnerable users.
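To make the mechanism concrete, the clustering behaviour described above can be sketched in a few lines. This is a purely illustrative toy, not any platform's actual system: all data, cluster centroids, and function names here are hypothetical, and real recommender systems are vastly more complex.

```python
# Toy sketch of cluster-based content curation (hypothetical data throughout):
# users are grouped by similarity of interest vectors, and each group is
# served whatever ranks highly within that group's feed.
from math import dist  # Euclidean distance, Python 3.8+

# Hypothetical user interest vectors: (politics, health, sports)
users = {
    "u1": (0.9, 0.8, 0.1),  # strong politics/health interest
    "u2": (0.8, 0.9, 0.2),  # similar profile to u1
    "u3": (0.1, 0.1, 0.9),  # mostly sports
}

# Pre-computed cluster centroids (in practice these come from training)
centroids = {
    "cluster_a": (0.85, 0.85, 0.15),
    "cluster_b": (0.15, 0.05, 0.85),
}

def assign_cluster(vec):
    """Group a user with the nearest centroid: 'users with similar features'."""
    return min(centroids, key=lambda c: dist(vec, centroids[c]))

# Engagement-ranked items per cluster; one item is misinformation that
# happened to perform well inside cluster_a.
cluster_feed = {
    "cluster_a": ["vaccine-myth-post", "election-rumour"],
    "cluster_b": ["match-highlights"],
}

def recommend(user_id):
    """Serve the user the top items for their assigned cluster."""
    return cluster_feed[assign_cluster(users[user_id])]

# u1 and u2 land in the same cluster, so once misinformation gains traction
# there, every member of the group receives it.
print(recommend("u1"))  # -> ['vaccine-myth-post', 'election-rumour']
```

The point of the sketch is that nothing targets u1 or u2 individually: the clustering step alone is enough to funnel the same harmful item to everyone with a similar profile, which is exactly why transparency advocates want these grouping mechanisms opened to audit.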
Transparency should be a priority
No Big Tech company currently discloses the algorithms that determine what content users see, and Microsoft is the only Big Tech company implementing AI transparency standards. AI software development at Microsoft is guided by six principles, one of which is transparency: users should be fully aware of the AI system, how it works, and any limitations that can be expected. This type of algorithmic transparency should be implemented across social media.
The proposed EU Digital Services Act (DSA), published in draft in 2020, aims to establish transparency standards for algorithms and algorithmic audits. It places obligations on platforms to provide transparency on their content curation and moderation operations. The Act is still in development, but companies that fail to comply could face fines of up to 6% of their global revenues. The US should implement similar measures to fully address the issue of disinformation: it is the algorithms directing dangerous content towards susceptible users that make social media misinformation so dangerous.