Britain’s National Cyber Security Centre (NCSC) has cautioned businesses on the dangers of incorporating machine learning (ML) and large language models (LLMs) into their services.

LLMs are algorithms that use ML to summarise, generate and predict new content. 

Access deeper industry intelligence

Experience unmatched clarity with a single platform that combines unique data, AI, and human expertise.

Find out more

Since ChatGPT’s release in late 2022, the unprecedented popularity of the chatbot has seen businesses integrating LLMs into their products.

In two blog posts on Wednesday, officials stated: “Organisations building services that use LLMs need to be careful, in the same way they would be if they were using a product or code library that was in beta.”

The NCSC identified two potential weak spots in LLMs that could be exploited by attackers: data poisoning attacks and prompt injection attacks.

Prompt injection attacks involve an input designed to cause the model to ‘generate offensive content, reveal confidential information, or trigger unintended consequences in a system.’

GlobalData Strategic Intelligence

US Tariffs are shifting - will you react or anticipate?

Don’t let policy changes catch you off guard. Stay proactive with real-time data and expert analysis.

By GlobalData

Data poisoning, meanwhile, relies upon the inherent weakness of machine learning: the vast training data the model requires. Data is typically sourced from the internet and can include content that is inaccurate or controversial.

In 2021, Google ethics researchers predicted the risks of data poisoning for LLMs in a paper titled  ‘On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?’

The paper found that LLMs are likely to absorb worldviews belonging to dominant groups from their training data. Harmful stereotypes against women and minorities risk being embedded in algorithms that are trained on datasets that do not represent all people.

US President Joe Biden said in April that artificial intelligence (AI) “could be” dangerous and technology companies should ensure that their products are safe

The rising concern around AI was highlighted by an open letter by the Future of Life Institute, which said: “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”

The letter, titled ‘Pause Giant AI Experiments,’ has now received over 33,000 signatures.