The UK has published AI safety guidelines today (27 November) that have been signed by 18 countries around the world. 

The guidelines were written in partnership with industry insiders and detail how AI can be designed and deployed safely. They are the first of their kind to be agreed upon globally. 

The signatory countries include the US, Australia, Nigeria, South Korea and Japan, among others.

The guidelines will be officially launched today at an event hosted by the UK’s National Cyber Security Centre (NCSC), featuring panellists from the Alan Turing Institute, Microsoft and cybersecurity agencies from countries including Germany.

The guidelines state that AI developers must model potential threats to their systems and consider cybersecurity “holistically”.

In addition to preventing cyberattacks, software developers must also be aware of the potential knock-on effects for wider society if their AI system is breached. Security must be as high a priority in AI development as functionality and performance, according to the recommendations.


The guidelines also advise businesses to raise staff awareness of AI cybersecurity threats.

More and more businesses are turning to AI solutions. In a 2023 GlobalData survey, around 17% of businesses said they had a high level of AI integration in their workflows.

This interest in AI is not confined to the technology sector, as workers with no previous AI training are increasingly expected to use it in their work.

Despite this readiness to adopt AI, research has suggested that businesses are not properly informed of the potential risks posed to them by using the technology. In a survey by cybersecurity company ExtraHop, only 36% of businesses stated that cybersecurity was a top concern in their AI adoption plans. 

NCSC CEO Lindy Cameron spoke about the swift adoption of AI by businesses.

“We know that AI is developing at a phenomenal pace and there is a need for concerted international action, across governments and industry, to keep up,” she stated.

“These guidelines mark a significant step in shaping a global, common understanding of the cyber risks and mitigation strategies around AI to ensure that security is not a postscript to development but a core requirement throughout,” added Cameron.

GlobalData principal analyst Laura Petrone said that a “secure by default” approach was an important step for global AI safety and development.

“However, the document only includes recommendations and falls short of providing any obligations for providers of AI systems,” Petrone told Verdict. “The UK wants to become an agenda-setter in AI governance, but without legislation, it risks falling behind the EU, China and now the US, all of which are committed to developing their own rules on AI safety.”

Writing for Fast Company in October 2023, Microsoft VP of security Vasu Jakkal said that AI would bring about a renaissance for both attackers and defenders in the cybersecurity sector.

“In cybersecurity, AI capabilities will be revolutionary,” she wrote. “We can see and react to attackers in real time, already knowing what the threat is and what the exploit is attempting to do.”

But AI, she warned, could also give rise to more personalised cyberattacks.

As AI technology becomes more embedded in businesses, households and culture, Jakkal said that without intervention, cyberattacks would become more devastating “and will erode the trust” between technology and users.

GlobalData’s executive briefing on AI states that every tech company will need its own governance plan for AI ethics.

In its ranking of critical regulatory pressures facing tech companies, AI ethics was rated the biggest challenge for Big Tech, yet it was also the area where every company examined was least prepared.