The US should sell NVIDIA’s artificial intelligence (AI) chips only to buyers that agree to use the emerging technology ethically, Google DeepMind’s co-founder Mustafa Suleyman said in an interview with the Financial Times on Friday.

Suleyman, who has been a consistent advocate of tougher AI regulation, urged the US to enforce the standards presented by leading AI companies in July.

The DeepMind co-founder was referring to the voluntary commitments made by companies including Alphabet, Meta and OpenAI to keep AI as safe as possible.

These included pledges such as watermarking AI-generated content to help with deepfake monitoring, as well as allowing external testing of soon-to-be-released AI systems.

“The US should mandate that any consumer of Nvidia chips signs up to at least the voluntary commitments and more likely, more than that,” Suleyman told the publication. 

“That would be an incredibly practical chokepoint that would allow the US to impose itself on all other actors,” he added.


Suleyman’s calls come as lawmakers in the US and in countries across the world scramble to get a regulatory hold on the rapid development of AI.

According to Suleyman, the “exponential trajectory” of AI means that large language models will be 100 times more powerful than OpenAI’s GPT-4 in just two years’ time. 

Suleyman added that he believes too much of the focus on AI threats surrounds superintelligence, which he calls a “huge distraction”.

In May, ex-Google CEO Eric Schmidt claimed many people could be “harmed or killed” by AI if it was not regulated properly.

This concern has been echoed by thousands of industry figureheads, including Elon Musk, who joined together in April to call for a six-month pause on AI development.

“Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us?” the letter read.

However, Suleyman argues that the industry should be “focused on the practical near-term capabilities which are going to arise in the next 10 years”.

Research firm GlobalData’s 2023 AI Thematic Intelligence report predicted that the global specialised AI applications market will grow from $31.1bn in 2022 to $146bn in 2030, a compound annual growth rate of 21.3%.