R&D project develops security guidelines for AI applications in cars

Technology News
By Christoph Hammerschmidt

The German cybersecurity authority BSI and the technology company ZF have launched a joint project to research how the IT security of AI systems in cars can be tested according to generally accepted criteria. The findings from the project are to be used in the medium term to develop test criteria for AI systems in the form of a modular technical guideline.

With the introduction of continuously connected cars, cybersecurity is becoming one of the most pressing issues for developers of electronic vehicle systems. This is especially true for AI applications, whose cybersecurity is difficult to verify because the underlying algorithms are opaque, and for which technical guidelines and standards are still largely lacking. According to the project partners, the guidelines to be developed jointly by the BSI and ZF will feed into the development of future security checks for cars and into international standardisation efforts. ZF’s AI Lab will carry out the project until September of this year in cooperation with the BSI as the customer. TÜV Informationstechnik GmbH (TÜViT), a TÜV Nord Group company specialising in IT security, is also involved as a further project partner.

“If you want to provide optimal protection, you have to know exactly how attackers proceed,” says Dr Georg Schneider, who heads ZF’s AI Lab in Saarbrücken, Germany, and is responsible for the project there. This requires the continuous analysis and evaluation of attack strategies. In the lab, attacks are therefore simulated and the reaction of the AI systems is analysed. The focus is on various methods of so-called adversarial attacks against a current AI-based traffic sign recognition system. Such attacks manipulate optical signals in order to make AI systems perform undesired operations; a manipulated 60 km/h speed limit sign could, for example, be recognised as a 120 km/h sign. A comprehensive understanding of such attack tactics is a prerequisite for developing successful defence strategies.
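The article does not describe the lab's actual attack implementations. As a rough illustration of the principle only, the following sketch applies the Fast Gradient Sign Method (FGSM), a standard adversarial-example construction, to a toy logistic classifier standing in for a sign recogniser. All names, dimensions, and parameters here are illustrative assumptions, not details from the BSI/ZF project.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, y, w, b, eps):
    """Fast Gradient Sign Method on a toy logistic classifier.

    Nudges input x by eps in the sign direction of the loss gradient,
    the canonical way to craft an adversarial example.
    """
    p = sigmoid(w @ x + b)       # model's predicted probability of class 1
    grad_x = (p - y) * w         # d(cross-entropy)/dx for a logistic model
    return x + eps * np.sign(grad_x)

# Toy stand-in for a sign classifier: class 0 = "60", class 1 = "120".
rng = np.random.default_rng(0)
w = rng.normal(size=16)
b = 0.0

# A clean input lying safely on the class-0 side of the decision boundary.
x = -0.5 * w / np.linalg.norm(w)
clean_pred = sigmoid(w @ x + b) > 0.5   # correctly classified as "60"

# A small, bounded perturbation flips the prediction to "120".
x_adv = fgsm_attack(x, y=0.0, w=w, b=b, eps=0.5)
adv_pred = sigmoid(w @ x_adv + b) > 0.5
```

In a real attack on a vision system the same gradient-sign idea is applied to image pixels (often physically, via stickers or paint on the sign), but the mechanism of a tiny, targeted perturbation flipping the classifier's output is the same.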

For the advantages of AI systems to be exploited in mobility, their safe use must be ensured. This requires tools with which such systems can be checked for risks, and with which those risks can be handled. “Suitable concepts and methods for this are not yet available or not fully developed,” says Dr Silke Bargstädt-Franke, head of the Cyber Security in Digitalisation department at the BSI. The technology is new and implements complex use cases through the interaction of classic software, AI systems, sensors and the physical environment. “Here, testing standards cannot be developed on paper. We need partners from practice to be able to identify possible practical problems in operation,” says Bargstädt-Franke.

The project that has now started will prepare the testing and further development of the corresponding requirements, methods and tools, which are then to be applied to two specific use cases in a follow-up project. The medium-term goal is a modular technical guideline based on the findings, which could serve, for example, as a basis for international standardisation efforts within the framework of the UNECE World Forum for Harmonisation of Vehicle Regulations.

Related articles

NXP certified to cybersecurity standard ISO 21434

Partnership enhances autonomous vehicle cybersecurity

Connected car OTA spec enhances data gathering, cybersecurity
