Analysis
July 19, 2022 (updated 20 July 2022, 2:31pm)

UK's new AI rulebook accused of being “vague” and “appeasing Big Tech”

The UK wants to become an AI superpower after Brexit, but analysts fear it is going about it in the wrong way

By Chloe Olivia Sladden

The government wants to transform the UK into an “AI and science superpower”. It has now unveiled a new policy aimed at making that vision a reality. However, analysts have criticised the proposals for “appeasing Big Tech” and for being too ambiguous to work in the long run.

Earlier this week, the UK government announced a new AI rulebook. Launched alongside its new national AI strategy, the rulebook aims to provide insights into how the government plans to regulate the AI sector. This apparently means adopting a light touch to ensure that the sector can thrive.

Instead of using one central regulator like the EU does, the UK AI rulebook aims to empower several “different regulators to take a tailored approach to the use of AI in a range of settings.”

Different regulators will be asked to individually interpret and implement the principles laid out by the government. Market watchdogs that could be asked to regulate the sector include Ofcom, the Competition and Markets Authority, the Information Commissioner’s Office, the Financial Conduct Authority, and the Medicines and Healthcare products Regulatory Agency.

“We want to make sure the UK has the right rules to empower businesses and protect people as AI and the use of data keeps changing the ways we live and work,” digital minister Damian Collins said in a statement. “It is vital that our rules offer clarity to businesses, confidence to investors and boost public trust. Our flexible approach will help us shape the future of AI and cement our global position as a science and tech superpower.”

The UK AI rulebook lists six core principles for regulators to adhere to. The first is to ensure that AI is used safely. The second is to ensure that AI is technically secure and functions as designed. The third aims to make sure that the technology is transparent and explainable.

Market watchdogs are also asked to consider fairness, to identify a legal person responsible for AI, and to “clarify routes to redress or contestability”.

UK AI rulebook condemned for playing into the hands of Big Tech

While the government is confident that the rulebook will cement the UK’s place as an AI superpower, analysts have condemned it for being vague, toothless and for playing straight into the hands of Big Tech.

“The new ‘pro-innovation’ UK AI guidelines are clearly aimed at appeasing Big Tech and encouraging AI investment in the UK,” Sarah Coop, thematic analyst at GlobalData, tells Verdict. “The drafts are still vague, but the proposal to be ‘proportionate and adaptable’ where ‘we will ask that regulators consider lighter touch options, such as guidance or voluntary measures’ suggests there will be no real pressure on Big Tech to change its AI systems in the short term.”

While the UK government may be happy to contrast the proposals with the EU’s approach, Coop believes that this is doing it no favours. She argues that the EU’s privacy-centric approach to tech regulation, such as the Digital Markets Act (DMA), “prohibits AI systems that are too high risk, including banning black box algorithms that humans cannot interpret.”

Coop also notes that the likes of Google and Apple aggressively lobbied against the DMA before EU lawmakers approved it at the beginning of July.

The government suggested that the new AI rulebook would enable it to better leverage the power of Brexit. Emma Taylor, thematic analyst at GlobalData, believes that while these efforts are commendable, the “policy [feels] like box-ticking to comply with increasing pressure to regulate Big Tech and ensure governments have watertight AI policies regarding transparency and ethics.”

“The UK is undoubtedly hoping that this digital strategy will counteract its lack of skilled workers by encouraging innovation and attracting talent, a lot of which has been driven away by tighter immigration controls,” Taylor says.

She can see why the “regulations are purposely vague to ensure they have a long shelf life”, which “is indicated by the refusal to set out a universally applicable definition of AI for fear of it not encompassing future technology.” However, Taylor warns that this “long-term thinking is inappropriate when regulating technology which is adapting so rapidly.”

“In practice, this type of policy and regulation should be reviewed in conjunction with established AI organisations continuously, and be precise and actionable, instead of ambiguous and performative,” Taylor concludes.

The government did not respond to requests to comment on this story.

GlobalData is the parent company of Verdict and its sister publications.