Call to tag and track high-end AI chips

Business news | By Nick Flaherty


Researchers are calling for a registry of high-performance AI chips as a way to regulate the industry.

A major report from the University of Cambridge suggests a range of policy options, including tagging and registering the use of high-end AI chips.

Other technical proposals floated by the report include “compute caps” – built-in limits on the number of interconnects an AI chip can make to other AI chips – and distributing a “switch” for AI training across multiple parties, allowing a digital veto of risky AI before it feeds on data.
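
As a rough sketch of how such a cap might behave (not a mechanism described in the report), the check below assumes a hypothetical firmware limit, MAX_INTERCONNECTS, that refuses to bring up further chip-to-chip links once the cap is reached:

```python
# Hypothetical sketch of a "compute cap": firmware refuses to bring up
# more chip-to-chip links than a built-in limit allows.
MAX_INTERCONNECTS = 8  # assumed cap baked into the chip at manufacture

class AIChip:
    def __init__(self, chip_id: str):
        self.chip_id = chip_id
        self.active_links: set[str] = set()

    def request_link(self, peer_id: str) -> bool:
        """Grant a new chip-to-chip link only while under the cap."""
        if peer_id in self.active_links:
            return True  # link already established
        if len(self.active_links) >= MAX_INTERCONNECTS:
            return False  # cap reached: the cluster cannot grow further
        self.active_links.add(peer_id)
        return True

chip = AIChip("chip-0")
granted = [chip.request_link(f"chip-{i}") for i in range(1, 11)]
print(granted.count(True), "of", len(granted), "links granted")  # 8 of 10
```

A hardware limit of this kind would bound how large a training cluster any one chip can join without requiring ongoing remote oversight.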

The researchers argue that AI chips and datacentres offer more effective targets for scrutiny and AI safety governance than data and algorithms, as computing hardware is detectable, excludable and quantifiable, and is produced via an extremely concentrated supply chain.

However, the suggestions in the report, titled Computing Power and the Governance of Artificial Intelligence, could have significant implications for chip designers and producers, from Nvidia and ARM to Intel, AMD and SiPearl, as well as up to 80 AI chip startups around the world.

The report is written by 19 AI researchers and co-led by three University of Cambridge institutes – the Leverhulme Centre for the Future of Intelligence (LCFI), the Centre for the Study of Existential Risk (CSER) and the Bennett Institute for Public Policy – along with OpenAI and the Centre for the Governance of AI.

“Artificial intelligence has made startling progress in the last decade, much of which has been enabled by the sharp increase in computing power applied to training algorithms,” said Haydn Belfield, a co-lead author of the report from Cambridge’s LCFI. “Trying to govern AI models as they are deployed could prove futile, like chasing shadows. Those seeking to establish AI regulation should look upstream to compute, the source of the power driving the AI revolution.”

“Governments are rightly concerned about the potential consequences of AI, and looking at how to regulate the technology, but data and algorithms are intangible and difficult to control,” said Belfield. “AI supercomputers consist of tens of thousands of networked AI chips hosted in giant data centres often the size of several football fields, consuming dozens of megawatts of power.”

The biggest AI models now use 350 million times more compute than they did thirteen years ago, say experts, which is behind the recent proposals by OpenAI chief Sam Altman to build dedicated AI chip fabs.
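
For scale, a 350-million-fold rise over thirteen years implies training compute growing roughly 4.5x per year, i.e. doubling about every five to six months; the quick check below works out the implied rate:

```python
import math

# Implied growth rate if training compute rose 350 million-fold in 13 years.
factor, years = 350e6, 13
annual_growth = factor ** (1 / years)                     # ~4.5x per year
doubling_months = 12 * math.log(2) / math.log(annual_growth)
print(f"{annual_growth:.1f}x per year, doubling every {doubling_months:.1f} months")
# -> 4.5x per year, doubling every ~5.5 months
```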

“Monitoring the hardware would greatly help competition authorities in keeping in check the market power of the biggest tech companies, and so opening the space for more innovation and new entrants,” said co-author Prof Diane Coyle from Cambridge’s Bennett Institute. 

The policy ideas are divided into three camps: increasing the global visibility of AI computing; allocating compute resources for the greatest benefit to society; and enforcing restrictions on computing power.

For example, a regularly audited international AI chip registry, requiring chip producers, sellers and resellers to report all transfers, would provide precise information on the amount of compute possessed by nations and corporations at any one time.

The report also suggests a unique identifier could be added to each chip to prevent industrial espionage and “chip smuggling”.
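
A minimal sketch of what one registry entry and a holdings query might look like, with illustrative fields (chip_uid, producer, seller, buyer) that are assumptions rather than anything specified in the report:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical shape of one entry in an audited AI chip registry.
@dataclass
class ChipTransferRecord:
    chip_uid: str       # unique identifier assigned to the chip
    producer: str       # original manufacturer
    seller: str         # transferring party
    buyer: str          # receiving party
    transfer_date: date

registry: list[ChipTransferRecord] = []

def report_transfer(record: ChipTransferRecord) -> None:
    """Producers, sellers and resellers report every transfer."""
    registry.append(record)

def holdings(party: str) -> int:
    """Chips currently held by a party: received minus passed on."""
    received = sum(1 for r in registry if r.buyer == party)
    sold = sum(1 for r in registry if r.seller == party)
    return received - sold

report_transfer(ChipTransferRecord("UID-001", "FabCo", "FabCo", "CloudA", date(2024, 2, 1)))
report_transfer(ChipTransferRecord("UID-001", "FabCo", "CloudA", "LabB", date(2024, 6, 1)))
print(holdings("CloudA"))  # 0: the chip was received and then transferred on
```

Because each record carries the chip's unique identifier, auditors could trace a chip through resales and flag any that disappear from the chain.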

“Governments already track many economic transactions, so it makes sense to increase monitoring of a commodity as rare and powerful as an advanced AI chip,” said Belfield. However, the team point out that such approaches could lead to a black market in untraceable “ghost chips”.

Other checks that would impact chip designers include physical limits on chip-to-chip networking, or cryptographic technology that allows for the remote disabling of AI chips in extreme circumstances.
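
How remote disabling could work is not detailed in the report; one plausible sketch is a chip that honours a “disable” command only when it carries a valid message authentication code under a key provisioned at manufacture. The key handling below is purely illustrative:

```python
import hashlib
import hmac

# Hypothetical sketch of remote disablement: the chip accepts a "disable"
# command only if it is authenticated with a key provisioned at the fab.
PROVISIONED_KEY = b"per-chip-secret-provisioned-at-fab"  # assumption

def sign_command(key: bytes, command: bytes) -> bytes:
    """Authority signs the command with the chip's provisioned key."""
    return hmac.new(key, command, hashlib.sha256).digest()

def chip_accepts(command: bytes, tag: bytes) -> bool:
    """Chip-side check: recompute the MAC and compare in constant time."""
    expected = hmac.new(PROVISIONED_KEY, command, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

cmd = b"DISABLE:chip-uid-001"
print(chip_accepts(cmd, sign_command(PROVISIONED_KEY, cmd)))  # True
print(chip_accepts(cmd, b"\x00" * 32))                        # False: forged
```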

One suggested approach would require the consent of multiple parties to unlock AI compute for particularly risky training runs, a mechanism familiar from nuclear weapons.
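
A minimal sketch of such a multi-party unlock, assuming a simple quorum of named parties rather than any particular cryptographic scheme:

```python
# Hypothetical multi-party "unlock" for a risky training run: compute is
# released only when a quorum of independent parties consents, analogous
# to two-person rules for nuclear weapons. Party names are illustrative.
REQUIRED_APPROVALS = 3  # assumed quorum, e.g. 3 of 4 parties

def unlock_training_run(approvals: set[str], parties: set[str]) -> bool:
    """Release compute only if enough distinct authorised parties consent."""
    valid = approvals & parties  # ignore unknown signatories
    return len(valid) >= REQUIRED_APPROVALS

parties = {"regulator", "cloud_provider", "auditor", "developer"}
print(unlock_training_run({"regulator", "auditor"}, parties))                     # False
print(unlock_training_run({"regulator", "auditor", "cloud_provider"}, parties))   # True
```

In practice such a quorum would likely be enforced with threshold cryptography so that no single party could bypass the veto, but the gating logic is the same.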

The report’s authors are clear that their policy suggestions are “exploratory” rather than fully fledged proposals and that they all carry potential downsides, from risks of proprietary data leaks to negative economic impacts and the hampering of positive AI development.

They offer five considerations for regulating AI chips through compute, including the exclusion of small-scale and non-AI computing, regular revisiting of compute thresholds, and a focus on privacy preservation. The full report, Computing Power and the Governance of Artificial Intelligence, is available from the University of Cambridge:

www.cam.ac.uk