Intel and Georgia Tech partner on mitigating ML deception attacks

Technology News
By eeNews Europe



Together, Intel and Georgia Tech have been selected to lead a program team for DARPA's Guaranteeing Artificial Intelligence (AI) Robustness against Deception (GARD) program, which aims to develop a new generation of defenses against adversarial deception attacks on ML models and applications. The program is designed to address vulnerabilities inherent in ML platforms, particularly their susceptibility to being altered, corrupted, or deceived, such as tricking the ML used by a self-driving car by making visual alterations to a stop sign.

“Intel and Georgia Tech are working together to advance the ecosystem’s collective understanding of and ability to mitigate against AI and ML vulnerabilities,” says Jason Martin, principal engineer at Intel Labs and principal investigator for the DARPA GARD program from Intel. “Through innovative research in coherence techniques, we are collaborating on an approach to enhance object detection and to improve the ability for AI and ML to respond to adversarial attacks.”

In the first phase of GARD, Intel and Georgia Tech are enhancing object detection technologies through spatial, temporal, and semantic coherence for both still images and videos. These three qualities give object detectors contextual clues for determining whether an anomaly or attack is occurring. While no real-world attacks on these systems are known, say the Georgia Tech researchers, the team first identified security vulnerabilities in object detectors in 2018 with a project known as ShapeShifter.

The ShapeShifter project, say the researchers, exposed adversarial machine learning techniques that could mislead object detectors and even make stop signs effectively invisible to an autonomous vehicle's detection.
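ShapeShifter's own optimization is not reproduced here, but the general flavour of such an attack can be sketched: iteratively perturb an image so that the detector's loss for the target object rises until the detection drops out. The Python sketch below, built on a pretrained torchvision Faster R-CNN, is only an illustration of that idea; the model choice, the COCO class index, the step count, and the perturbation bound are assumptions for the example, not the ShapeShifter implementation.

# Illustrative only: a generic gradient-based "object disappearance" attack
# against a pretrained Faster R-CNN, in the spirit of (but not identical to)
# ShapeShifter. Model, class index, and hyperparameters are assumptions.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.train()                        # training mode so the model returns per-target losses
for p in model.parameters():
    p.requires_grad_(False)          # only the image perturbation is optimised

image = torch.rand(3, 480, 640)      # placeholder frame; a real attack would use a stop-sign photo
# The detection we want the model to *lose*: a stop-sign box
# (index 13 is "stop sign" in the COCO labelling used by torchvision's pretrained weights).
target = [{"boxes": torch.tensor([[200.0, 150.0, 320.0, 270.0]]),
           "labels": torch.tensor([13])}]

delta = torch.zeros_like(image, requires_grad=True)
optimizer = torch.optim.Adam([delta], lr=1e-2)

for _ in range(50):
    adversarial = (image + delta).clamp(0.0, 1.0)
    losses = model([adversarial], target)      # dict of detection losses for this target
    loss = sum(losses.values())
    optimizer.zero_grad()
    (-loss).backward()               # maximise the detection loss so the stop sign is no longer found
    optimizer.step()
    delta.data.clamp_(-0.05, 0.05)   # keep the perturbation small, akin to a subtle sticker or patch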

“As ML technologies have developed, researchers used to think that attacking object detectors would be difficult,” says Polo Chau, who serves as the lead investigator from Georgia Tech on the GARD program. “ShapeShifter showed us that was not true: they can be affected, and we can attack them in a way that makes objects disappear completely or be labeled as anything we want. The reason we study vulnerabilities in ML systems is to get into the mindset of the bad guy in order to develop the best defenses.”

“Our research develops novel coherence-based techniques to protect AI from attacks,” says Chau. “We want to inject common sense into the AI that humans take for granted when they look at something. Even the most sophisticated AI today doesn’t ask, ‘Does it make sense that there are all these people floating in the air and are overlapping in odd ways?’ Whereas we would think it’s unnatural. That is what spatial coherence attempts to address – does it make sense in a relative position?”
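As a rough illustration of that idea (not the GARD team's actual code), a spatial-coherence check over a detector's output might look like the Python sketch below; the Detection structure, the "person" label, and the overlap threshold are assumptions made for the example.

# Hypothetical sketch of a spatial-coherence check: flag detections whose
# relative positions don't make physical sense, echoing the "overlapping,
# floating people" example. Thresholds and data structures are illustrative.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    box: tuple          # (x1, y1, x2, y2) in pixels
    score: float

def area(box):
    x1, y1, x2, y2 = box
    return max(0.0, x2 - x1) * max(0.0, y2 - y1)

def iou(a, b):
    """Intersection-over-union of two boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = area((x1, y1, x2, y2))
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def spatially_incoherent(detections, max_overlap=0.6):
    """Flag scenes where detected people overlap far more than real bodies can."""
    people = [d for d in detections if d.label == "person"]
    for i, a in enumerate(people):
        for b in people[i + 1:]:
            if iou(a.box, b.box) > max_overlap:
                return True
    return False

# Usage sketch: two nearly identical "person" boxes in one spot are flagged.
dets = [Detection("person", (100, 100, 200, 300), 0.9),
        Detection("person", (105, 110, 205, 305), 0.9),
        Detection("car", (400, 200, 600, 330), 0.8)]
print(spatially_incoherent(dets))   # -> True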

The idea of applying common sense to AI object recognition extends to other coherence-based techniques, say the researchers, such as temporal coherence, which checks for suspicious objects’ disappearance or reappearance over time. Further, the researchers’ UnMask semantic coherence technique, which is based on meaning, looks to identify the parts of an object rather than just the whole, and verifies that those parts indeed make sense.
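Temporal coherence lends itself to a similarly hedged sketch: an object tracked with high confidence in one frame should not simply vanish in the next while the scene is otherwise unchanged. The track format and score threshold below are illustrative assumptions, not the program's design.

# Hypothetical sketch of a temporal-coherence check on tracked detections:
# a confidently tracked object should not blink out of existence between
# consecutive frames. Track format and thresholds are illustrative assumptions.
def temporally_incoherent(prev_tracks, curr_tracks, min_prev_score=0.8):
    """
    prev_tracks / curr_tracks: dicts mapping a track id to a detection score.
    Return the ids of objects detected with high confidence in the previous
    frame but missing from the current one - a pattern consistent with an
    "object disappearance" attack rather than normal occlusion.
    """
    suspicious = []
    for track_id, score in prev_tracks.items():
        if score >= min_prev_score and track_id not in curr_tracks:
            suspicious.append(track_id)
    return suspicious

# Example: the stop sign tracked as id 7 suddenly vanishes between frames.
prev = {7: 0.95, 12: 0.88}   # frame t   : stop sign (id 7) and a car (id 12)
curr = {12: 0.90}            # frame t+1 : the stop sign is gone
print(temporally_incoherent(prev, curr))   # -> [7], flag for closer inspection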

The goal of all three coherence-based techniques, say the researchers, is to force attackers to satisfy the consistency rules of all three categories at once. This multi-perspective approach thwarts adversarial ML attempts that fail to meet the combined rules, causing any such breach to be flagged.
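Putting the pieces together, a defense in this spirit could flag a frame whenever any one of the coherence checks reports an anomaly, so an attacker has to fool all of them simultaneously. The callables below merely stand in for the kinds of spatial, temporal, and semantic checks sketched above; they are placeholders, not the program's actual architecture.

# Hypothetical combiner: a frame is flagged if *any* coherence check reports
# an anomaly, forcing an attacker to satisfy all three consistency rules at once.
from typing import Callable, Iterable

def flag_if_incoherent(frame, checks: Iterable[Callable[[object], bool]]) -> bool:
    """Each check returns True when it sees something incoherent in the frame."""
    return any(check(frame) for check in checks)

# Usage sketch: spatial_check, temporal_check, semantic_check would wrap the
# kinds of rules illustrated earlier (overlap tests, track continuity, and
# UnMask-style part consistency).
# flagged = flag_if_incoherent(frame, [spatial_check, temporal_check, semantic_check])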

Intel

Related articles:
How simple it is to fool AI algorithms for autonomous vehicles
DARPA looks to teach machines common sense
Common sense AI is goal of new research project
IEEE CS unveils its top 12 technology trends for 2020

 
