On 1 November, the UK’s iconic Bletchley Park campus is set to welcome around 100 world leaders, industry bigwigs and AI researchers to discuss how to mitigate artificial intelligence’s (AI) risks while trying to make the most out of its potential.

However, many remain sceptical about how relevant the issues being focused on at the landmark event truly are. 

The UK AI Safety Summit will be tackling risks that are extremely troubling, as seen in a report from the UK government last week.

Government research found that the technology could enable truly terrifying threats, including bioterrorism, advanced AI acting outside human control and deepfakes indistinguishable from reality.

UK Prime Minister Rishi Sunak addressed these fears in an optimistic national speech last week, telling the public not to worry. The UK had a game plan, he said, and he would make sure the country became the global leader in AI safety.

Of course, to tackle something as wide-reaching and rapidly growing as AI, a big-picture game plan is essential. But industry figureheads have questioned how much of a difference the UK AI Safety Summit can really make while focusing on something so monumental.


The AI Summit’s agenda faces criticism

The two-day event’s agenda will focus on what the UK government is calling “frontier AI”. This is the type of large-scale AI that anyone can feel nervous about if they ponder it long enough: the OpenAI ChatGPTs and Google Bards of the industry, and whatever they will become.

Much of the discussion will concern capabilities that do not yet exist, though the UK government says they soon will, given the pace at which the technology is advancing.

This has drawn criticism from experts and businesses who feel the summit has its priorities wrong. Only a select few companies, for example, currently operate anything close to the government’s definition of frontier AI.

Sam Lowe, senior manager for privacy and data compliance services at Alvarez & Marsal, told Verdict that the relevance of the summit’s agenda to organisations beyond large-scale AI model developers appears limited.

“For organisations struggling to manage risks when building existing AI models into their business processes, the outcomes appear unlikely to be much help in addressing the current challenges of privacy, intellectual property, bias and fairness, and questions of liability that are often the main areas of concern,” Lowe said. 

“With a focus on future potential risk, the AI Safety Summit is unlikely to do much to address current AI risk challenges for organisations in the here and now,” he added.

Can the UK be an AI safety leader without regulation?

According to Laura Petrone, an analyst at research company GlobalData, the UK has taken a cautious approach to AI regulation, refraining from introducing statutory rules for “fear of stifling innovation”.

“The EU and China have been the most active in envisaging regulatory frameworks and will likely set the standard for AI regulation over the next few years,” Petrone added.

While the UK’s cautious approach has pleased parts of the industry, many businesses have called for more robust guidelines to support AI adoption.

Natalie Cramp, CEO of data company Profusion, said what businesses “really need to help them adopt AI faster is certainty and this comes from clear, well thought out and robust regulations and guidelines”.

Cramp said: “At the moment the UK Government has not put forward a clear strategy – or even vision – of how it intends to regulate AI.”

Lowe echoed this point, saying that uncertainty lingers over businesses’ understanding of AI regulation.

“As regulations continue to evolve, there remains a disconnect in how the maturity of academic and research responses to AI ethics and governance issues is applied to industry practices,” Lowe said. 

“Closing this gap will be key in establishing regulation that provides clear and practical support for organisations,” he added.

Tom Cornell, senior psychology consultant at HireVue, told Verdict that the UK “can objectively be seen to be behind in legislating AI, and many would see this take on AI as lacking meaningful action.”

UK representation has also been an issue for some, as the only confirmed attendees representing the UK’s AI industry so far are Google DeepMind and Stability AI.

Although the guest list has not been announced, multiple reliable reports suggest that US tech giants will be strongly represented, with expected attendees including OpenAI CEO Sam Altman and X (formerly Twitter) owner Elon Musk.