The day before a summit on the safety of AI in the UK, the US president has issued an executive order on the topic.
The order, issued under the US Defense Production Act, will require that developers of the most powerful AI systems share their safety test results and other critical information with the US government. The same act was used to commandeer the production of ventilators during the Covid-19 pandemic.
The order also calls on the US National Institute of Standards and Technology (NIST) to set rigorous standards for extensive red-team testing before public release, and to develop standards, tools and tests to help ensure that AI systems are safe, secure and trustworthy.
The Department of Homeland Security will apply those standards to critical infrastructure sectors and establish the AI Safety and Security Board. “Together, these are the most significant actions ever taken by any government to advance the field of AI safety,” said the administration.
This comes as US Vice President Kamala Harris is due at Bletchley Park in the UK tomorrow for an AI safety summit.
UK prime minister Rishi Sunak announced last week the world’s first AI Safety Institute, which will examine, evaluate and test new types of AI to build an understanding of what each new model is capable of. It will look to share information with international partners, policymakers, private companies, academia and civil society as part of efforts to collaborate on AI safety research.
On Friday the UK government persuaded AI firms including DeepMind to outline their safety policies, following a request from the Technology Secretary last month. The UK government also proposed a set of emerging safety processes for the companies, providing information on how they can keep their models safe; the paper is intended to inform discussions at Bletchley Park.
The paper outlines practices for AI companies, including implementing responsible capability scaling – a new framework for managing frontier AI risk that several firms are already putting into action. This would see AI firms set out ahead of time which risks will be monitored, who is notified if those risks are found, and at what level of dangerous capability a developer would slow or even pause work until better safety mechanisms are in place.
Other suggestions include AI developers employing third parties to try to hack their systems to identify sources of risk and potential harmful impacts, as well as providing additional information on whether content has been AI generated or modified. This is the area that the US is seeking to regulate directly.
“This is the start of the conversation and as the technology develops, these processes and practices will continue to evolve, because in order to seize AI’s huge opportunities we need to grip the risks,” said UK Technology Secretary Michelle Donelan.
“We know openness is key to increasing public trust in these AI models, which in turn will drive uptake across society, meaning more will benefit, so I welcome AI developers publishing their safety policies today.”
Safety best practices have not yet been established for frontier AI development, which is why the UK government has published its emerging processes to inform the discussion of safe frontier AI at the summit, and why the US executive order will see NIST oversee much of the work.
NIST already sets the requirements for cryptography systems, and one aim of the US executive order is to protect against AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content. The US Department of Commerce will develop guidance for content authentication and watermarking to clearly label AI-generated content. Federal agencies will use these tools to authenticate government communications and set an example for the private sector and governments around the world.
The US is also to set up a cybersecurity programme to develop AI tools that find and fix vulnerabilities in critical software, while the UK is setting up a taskforce to address the issue.
“While Frontier AI brings opportunities, more capable systems can also bring increased risk. AI companies providing increased transparency of their safety policies is a first step towards providing assurance that these systems are being developed and deployed responsibly,” said Ian Hogarth, chair of the UK’s Frontier AI Taskforce.
“Over the last few months the UK Government’s Frontier AI Taskforce has been recruiting leading names from all areas of the AI ecosystem, from security to computer science, to advise on the risks and opportunities from AI.”
The US says it aims to accelerate development and implementation of vital AI standards with international partners and in standards organizations, ensuring that the technology is safe, secure, trustworthy, and interoperable.