Artificial intelligence (AI) could become powerful enough to "kill many humans" within two years, and society may have only that long to rein it in, according to a stark warning from Rishi Sunak's advisor.

Matt Clifford said that AI poses many different near-term and long-term risks, and that even "the near-term risks are actually pretty scary".

In an interview with the UK's TalkTV, Sunak's advisor said that the idea of AI overtaking human intelligence within two years was at the "bullish end of the spectrum".

Clifford said he believed the chance of AI wiping out the whole of humanity was "zero".

However, he added: “If we go back to things like the bioweapons or cyber (attacks), you can have really very dangerous threats to humans that could kill many humans – not all humans – simply from where we would expect models to be in two years’ time.”

In a Twitter update, Clifford reaffirmed the points he made in the interview while insisting that there was a lot of nuance to the wide-ranging topic.


“Short and long-term risks of AI are real and it’s right to think hard and urgently about mitigating them but there’s a wide range of views and a lot of nuance here, which it’s important to be able to communicate,” Clifford wrote.

The warning comes after an open letter penned by a group of leading CEOs, engineers and researchers warned about the fatal threat AI poses.

“Mitigating the risk of extinction from artificial intelligence should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” it read.

Published by the Center for AI Safety, a US-based non-profit, the letter was signed by industry heavyweights including OpenAI CEO Sam Altman and Google DeepMind CEO Demis Hassabis.

“The kind of existential risk that I think the letter writers were talking about is about what happens once we effectively create a new species, an intelligence that is greater than humans,” Clifford said.

GlobalData is the parent company of Verdict and its sister publications.