The UK government is expected to announce that Article 22 of the GDPR, absorbed into UK law post-Brexit, could be revoked or rewritten with the aim of boosting AI innovation and market opportunity in the UK. Article 22 details the right to a human review of automated decisions, including profiling, such as the decision to award a loan or job.

Revoking or amending Article 22 would likely worsen algorithmic bias, affecting minorities the most. Removing the right to review could damage innovation rather than aid it, resulting in further algorithmic inequalities. The possible decision also marks a divergence from the EU and its formalized approach to AI regulation, with the EU proposing a landmark attempt to regulate AI in April 2021.

Removing algorithmic bias is integral to achieving the goal of algorithms with correct outputs. Revoking Article 22 would remove the checks and balances necessary to achieve this, while also harming public confidence in AI. This in turn may make companies reluctant to adopt AI, the opposite of the outcome the government intends.

The decision will curb not increase AI innovation

GlobalData forecasts the market for AI platforms will reach $52bn in 2024, up from $29bn in 2019. Culture secretary Oliver Dowden called the potential changes to Article 22 a “data dividend,” helping to increase AI innovation and UK share in the global AI market. However, as AI becomes more pervasive and embedded in life-changing decisions, the need for transparency has intensified. Innovation and regulation must be used in tandem to create workable AI models.

Safeguards are important to reduce algorithmic bias and to ensure existing biases are not exacerbated. Biased AI models are not a sound strategy for innovation, and changes to Article 22 would leave AI models vulnerable to bias becoming ingrained, limiting innovation and eroding public confidence. Innovation is also limited by a lack of transparency: if users do not understand the algorithms being applied to them, those AI models are unlikely to become pervasive.

In its review into bias in algorithmic decision-making, the Centre for Data Ethics and Innovation (CDEI) recommended the UK government place mandatory transparency obligations on all public sector organisations using algorithms to make significant decisions affecting citizens’ lives. Altering Article 22 diverges from this and could be detrimental to certain user groups.


Removing Article 22 would adversely affect minorities

There have been numerous high-profile cases of algorithmic bias negatively affecting minority groups, due to unrepresentative training data. For example, AI-powered recruitment usually works by training algorithms on data about existing employees, reflecting past hiring and promotion patterns. However, this means any past diversity issues are included in the dataset and may be replicated by the algorithm when hiring new talent. This has led to racism and sexism in hiring algorithms for traditionally male dominated industries.

Similarly, when the UK government used algorithms to decide students’ A-level results, many critics said the algorithm was biased and disproportionately affected students from disadvantaged backgrounds. The standardization model, developed by Ofqual, used data including schools’ previous grade distributions, which led to adverse outcomes.

If the Article 22 changes are approved, minorities who receive adverse automated outcomes will have no power to safeguard themselves from algorithmic bias.

EU regulation prioritizes fair AI

The proposed EU Artificial Intelligence (AI) legislative framework is a landmark attempt to regulate AI. The proposal has various measures, including regulating model input data, based on which the system produces an output. Regulating input data could improve AI explainability and reduce algorithmic bias. Unbalanced data is one of the main reasons for misrepresentations in AI models.

The UK’s proposal of removing the legal framework for human review of algorithmic decisions diverges from the EU’s privacy-centric standpoint. Removing Article 22 would adversely affect minorities and cause inequalities to be exacerbated. Regulatory checks for automated decisions should be kept, to ensure fair and productive AI innovation.