The UK is at risk of falling behind the “pace of development” of AI if regulation is not passed soon, warns a group of MPs in a recent report.

The report by the UK’s Science, Innovation and Technology Committee states that although the UK’s history as a technological leader makes the country well placed to set a global standard for AI regulation, legislation needs to be introduced sooner.

The UK government has already attempted to highlight the UK’s tech history by announcing Bletchley Park as the location of the upcoming AI Safety Summit. 

In the report, the committee also calls for collaboration on AI regulation between similar countries to the UK that share “liberal, democratic values”. 

Underlining the speed at which AI is developing, the MPs chose to write the report’s introductory paragraph with OpenAI’s ChatGPT.

At first glance, the paragraph reads well, and the AI captures the key themes discussed in the report. The committee explains that it used ChatGPT to exemplify how ubiquitous and “general-purpose” the technology has become.


Ongoing GlobalData surveys show that around 17% of businesses polled reported a very high rate of AI adoption in their services as of August 2023, up from 11% in June.

Despite an increase in AI adoption, business sentiment around the data privacy of AI remains low. 

Over 50% of businesses polled by GlobalData stated that they were very concerned about possible data security risks from using AI software.

Speaking on the cybersecurity risks of AI, Claire Trachet, tech expert and CEO of business advisory Trachet, warns that while generative AI continues to generate buzz among businesses looking to get ahead, its “fast-growing nature” has made it difficult for governments to regulate.

“Even though there are some forms of risk management and different reports coming out, none of them are true co-ordinated approaches,” Trachet said. 

AI regulation needs to be balanced, according to Trachet, with equal investment in stimulating innovation and in mitigating data privacy risks.

The committee’s report reaffirms that the sudden rush of development and progress in AI has surprised “even well-informed observers”, making the technology difficult to predict accurately.

Despite this, the report concludes that AI should not be viewed “as a form of magic” or as sentient, and instead emphasises the importance of education around AI. It defines AI as a tool or model instructed by humans to help perform tasks.

However, its widespread use among businesses can spread misinformation or reinforce algorithmic bias.