AI regulation must focus on hardware to ensure AI safety, the University of Cambridge warns in a new report published 14 February.

The paper’s researchers argue that data centres and AI chips offer more effective targets for safety regulation than software.

Training data and AI algorithms can be duplicated and therefore disseminated, whereas AI hardware is produced and controlled by only a small handful of companies worldwide, the researchers say. This makes hardware a tangible and effective intervention point for regulation.

“Artificial intelligence has made startling progress in the last decade, much of which has been enabled by the sharp increase in computing power applied to training algorithms,” stated Haydn Belfield, a co-lead author of the report. 

“AI supercomputers consist of tens of thousands of networked AI chips hosted in giant data centres often the size of several football fields, consuming dozens of megawatts of power,” Belfield said, adding: “Computing hardware is visible, quantifiable, and its physical nature means restrictions can be imposed in a way that might soon be nearly impossible with more virtual elements of AI.” 
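Belfield’s figure is easy to sanity-check with back-of-envelope arithmetic. The sketch below uses illustrative assumptions only (roughly 700W per accelerator, in line with current data-centre AI chips, and a power-usage-effectiveness overhead for cooling); none of the values come from the report:

    # Back-of-envelope check of the "dozens of megawatts" figure.
    # All values are illustrative assumptions, not from the report:
    # ~700 W per accelerator and a power-usage-effectiveness (PUE)
    # multiplier for cooling and other facility overhead.

    CHIPS = 30_000            # "tens of thousands of networked AI chips"
    WATTS_PER_CHIP = 700      # assumed draw per accelerator
    PUE = 1.3                 # assumed facility overhead multiplier

    total_megawatts = CHIPS * WATTS_PER_CHIP * PUE / 1_000_000
    print(f"Estimated facility draw: {total_megawatts:.1f} MW")
    # ~27 MW, consistent with "dozens of megawatts"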

The paper posits three ideas for possible policies. 


These focus, in turn, on allocating computing resources for the greater good of society, enforcing limits on computing power, and increasing the visibility of AI computing.
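The report argues for such limits as policy rather than prescribing an implementation. As a purely illustrative sketch, a compute cap could take the form of a meter that tallies training FLOPs and halts a run at a regulatory threshold; the class, threshold, and step size below are all hypothetical:

    # Hypothetical sketch of "enforcing limits on computing power":
    # a meter that tallies training FLOPs and stops a run at a cap.
    # Nothing here comes from the Cambridge report itself.

    class ComputeCapExceeded(Exception):
        pass

    class ComputeMeter:
        def __init__(self, cap_flops: float):
            self.cap_flops = cap_flops
            self.used_flops = 0.0

        def record(self, step_flops: float) -> None:
            """Add the FLOPs of one training step; halt if over the cap."""
            self.used_flops += step_flops
            if self.used_flops > self.cap_flops:
                raise ComputeCapExceeded(
                    f"{self.used_flops:.2e} FLOPs exceeds cap "
                    f"{self.cap_flops:.2e}"
                )

    # The 1e26 figure mirrors the operations threshold in the 2023 US
    # Executive Order on AI, used here only as an example cap.
    meter = ComputeMeter(cap_flops=1e26)
    meter.record(step_flops=3e21)  # one large training step, assumed size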

The report also proposes remedies for illicit activity in semiconductor production and distribution, suggesting that a unique identifier could be embedded within each chip to prevent smuggling.
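The paper leaves the identifier mechanism open. One way such a scheme could work, sketched here with a hypothetical registry and a placeholder signing key, is a tamper-evident tag that regulators could check against a list of licensed operators:

    # Hypothetical sketch of verifying a unique on-chip identifier.
    # The registry contents and shared secret are placeholders; a real
    # scheme would use hardware-backed keys, not a shared secret.

    import hashlib
    import hmac

    REGISTRY = {"chip-00421": "licensed-datacentre-01"}  # hypothetical data
    SIGNING_KEY = b"regulator-secret"                    # placeholder key

    def chip_tag(chip_id: str) -> str:
        """Tamper-evident tag a chip could present alongside its ID."""
        return hmac.new(SIGNING_KEY, chip_id.encode(),
                        hashlib.sha256).hexdigest()

    def verify(chip_id: str, tag: str) -> bool:
        """Check the tag and confirm the chip is registered to an operator."""
        expected = chip_tag(chip_id)
        return hmac.compare_digest(expected, tag) and chip_id in REGISTRY

    print(verify("chip-00421", chip_tag("chip-00421")))  # True
    print(verify("chip-99999", chip_tag("chip-99999")))  # False: unregistered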

“Trying to govern AI models as they are deployed could prove futile, like chasing shadows,” Belfield continued. “Those seeking to establish AI regulation should look upstream to compute, the source of the power driving the AI revolution.”

Victor Botev, CTO of AI platform provider Iris.ai, said that while a focus on AI hardware could contribute to holistic AI safety regulation, regulating hardware alone would not be sufficient.

“There are obvious practical reasons for doing so given the physical nature and small number of supply chains, but there are other alternatives to the hardware always playing catch up with the software,” Botev stated. 

“We need to ask ourselves if bigger is always better. In the race for ever bigger large language models, let’s not forget the often more functional domain-specific smaller language models that already have practical applications in key areas of the economy,” he said.