
NIST in the US has detailed the types of attack that generative AI is vulnerable to, along with approaches for mitigation, and warns of ‘snake oil’ claims of foolproof defences.
AI systems can malfunction when exposed to untrustworthy data, and attackers are exploiting a number of issues, says the US National Institute of Standards and Technology (NIST).
Despite the explosion of AI systems demonstrated at CES 2024 this week, from generative AI to edge AI and self-driving systems, there is no foolproof method for protecting AI from misdirection, and AI developers and users should be wary of anyone who claims otherwise, it says.
An AI system can malfunction if an adversary finds a way to confuse its decision-making. Adversaries can deliberately confuse or even poison AI systems to make them untrustworthy, and there is no foolproof defence that developers can employ.
“We are providing an overview of attack techniques and methodologies that consider all types of AI systems,” said NIST computer scientist Apostol Vassilev, one of the publication’s authors. “We also describe current mitigation strategies reported in the literature, but these available defences currently lack robust assurances that they fully mitigate the risks. We are encouraging the community to come up with better defences.”
One major issue is that the data itself may not be trustworthy. Its sources may be websites and interactions with the public. There are many opportunities for bad actors to corrupt this data — both during an AI system’s training period and afterward, while the AI continues to refine its behaviours by interacting with the physical world. This can cause the AI to perform in an undesirable manner. Chatbots, for example, might learn to respond with abusive or racist language when their guardrails get circumvented by carefully crafted malicious prompts.
“For the most part, software developers need more people to use their product so it can get better with exposure,” Vassilev said. “But there is no guarantee the exposure will be good. A chatbot can spew out bad or toxic information when prompted with carefully designed language.”
In part because the datasets used to train an AI are far too large for people to successfully monitor and filter, there is no foolproof way as yet to protect AI from misdirection. To assist the developer community, the report offers an overview of the sorts of attacks their AI products might suffer and corresponding approaches to reduce the damage caused by untrustworthy AI.
The report, co-authored with researchers from Robust Intelligence, breaks down each of these classes of attack into subcategories and adds approaches for mitigating them, though the publication acknowledges that the defences AI experts have devised for adversarial attacks thus far are incomplete at best. Awareness of these limitations is important for developers and organizations looking to deploy and use AI technology, Vassilev said.
The report considers four major types of attack used to make AI untrustworthy: evasion, poisoning, privacy and abuse attacks. It also classifies them according to multiple criteria, such as the attacker’s goals and objectives, capabilities, and knowledge.
- Evasion attacks, which occur after an AI system is deployed, attempt to alter an input to change how the system responds to it (see the first sketch after this list). Examples include adding markings to stop signs to make an autonomous vehicle misinterpret them as speed limit signs, or creating confusing lane markings to make the vehicle veer off the road.
- Poisoning attacks occur in the training phase, when corrupted data is introduced. An example would be slipping numerous instances of inappropriate language into conversation records, so that a chatbot interprets these instances as common enough parlance to use in its own customer interactions.
- Privacy attacks, which occur during deployment, are attempts to learn sensitive information about the AI or the data it was trained on in order to misuse it. An adversary can ask a chatbot numerous legitimate questions, then use the answers to reverse engineer the model and find its weak spots, or guess at its sources (see the second sketch after this list). Adding undesired examples to those online sources could make the AI behave inappropriately, and making the AI unlearn those specific undesired examples after the fact can be difficult.
- Abuse attacks involve the insertion of incorrect information into a source, such as a webpage or online document, that an AI then absorbs. Unlike the aforementioned poisoning attacks, abuse attacks attempt to feed the AI incorrect pieces of information from a legitimate but compromised source in order to repurpose the AI system away from its intended use.
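
To make the evasion category concrete, the following is a minimal sketch of one widely studied way of crafting such inputs, the Fast Gradient Sign Method (FGSM). The NIST report catalogues evasion attacks in general terms and does not prescribe this particular method; the model, inputs and labels below are hypothetical placeholders.

```python
# Minimal sketch of an evasion attack via the Fast Gradient Sign Method (FGSM).
# "model" is any differentiable classifier, x an input batch, y its true labels;
# these names are illustrative, not taken from the NIST report.
import torch
import torch.nn.functional as F

def fgsm_evasion(model, x, y, epsilon=0.03):
    """Return a perturbed copy of x that the model is more likely to misclassify."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    with torch.no_grad():
        # Nudge each input value in the direction that increases the loss,
        # bounded by epsilon so the change stays imperceptible to a person.
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        x_adv = x_adv.clamp(0.0, 1.0)  # keep pixel values in a valid range
    return x_adv.detach()
```

The point of the sketch is the tiny perturbation budget (epsilon): the altered input looks unchanged to a human observer, yet it can flip the classifier’s output.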
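For the privacy category, the second sketch illustrates one simple form of model extraction: the attacker sends many legitimate-looking queries to a black-box model, records its answers, and trains a surrogate it can probe offline for weak spots. Again, this is an illustration under assumed names (query_victim, probe_inputs), not a technique specified by the report.

```python
# Rough sketch of a privacy/extraction attack: query a black-box victim model,
# collect its answers, and fit a surrogate on them. All names are hypothetical.
import torch
import torch.nn as nn

def extract_surrogate(query_victim, probe_inputs, n_features, n_classes, epochs=20):
    """Train a surrogate classifier to imitate a black-box victim model.

    query_victim: callable returning the victim's predicted class indices (LongTensor).
    probe_inputs: float tensor of shape (N, n_features) the attacker may submit.
    """
    stolen_labels = query_victim(probe_inputs)  # the victim's answers
    surrogate = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(),
                              nn.Linear(64, n_classes))
    opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(surrogate(probe_inputs), stolen_labels)
        loss.backward()
        opt.step()
    # The attacker can now probe the surrogate offline to look for weak spots.
    return surrogate
```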
“Most of these attacks are fairly easy to mount and require minimum knowledge of the AI system and limited adversarial capabilities,” said co-author Alina Oprea, a professor at Northeastern University. “Poisoning attacks, for example, can be mounted by controlling a few dozen training samples, which would be a very small percentage of the entire training set.”
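As a rough illustration of how little data that can be, the sketch below flips the labels of 50 randomly chosen training records, a simple untargeted form of the poisoning Oprea describes. The function and counts are hypothetical examples rather than code from the report.

```python
# Rough sketch of label-flipping data poisoning: the attacker controls only a
# few dozen records out of a large training set and flips their labels.
# Function name, label list and counts are hypothetical placeholders.
import random

def poison_labels(labels, num_classes, n_poisoned=50, seed=0):
    """Flip the labels of n_poisoned randomly chosen training examples."""
    rng = random.Random(seed)
    poisoned = list(labels)
    for i in rng.sample(range(len(poisoned)), n_poisoned):
        # Replace the true label with a different, randomly chosen class.
        wrong = [c for c in range(num_classes) if c != poisoned[i]]
        poisoned[i] = rng.choice(wrong)
    return poisoned

# 50 flipped labels in a set of 1,000,000 examples is just 0.005% of the
# training data, yet targeted variants can still change model behaviour.
```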
“Despite the significant progress AI and machine learning have made, these technologies are vulnerable to attacks that can cause spectacular failures with dire consequences,” said Vassilev. “There are theoretical problems with securing AI algorithms that simply haven’t been solved yet. If anyone says differently, they are selling snake oil.”
Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations (NIST.AI.100-2)
