How AI impacts the qualification of safety-critical automotive software

Technology News
By Nick Flaherty

As the automotive industry continues its push toward software-defined vehicles, development teams must understand how artificial intelligence (AI) and machine learning (ML) affect them.

The growing use of AI and ML technologies in automotive software, both on the manufacturing floor and onboard vehicles, requires rethinking the approach to functional safety processes and certification, says Mark Pitchford, technical specialist at LDRA.

Manufacturers and suppliers have already begun the work, and functional safety standards are following close behind. This article looks at how AI/ML is being used in automotive applications and at the steps being taken to maintain safety-critical software practices within the industry.

Figure 1. Test vectors created by the LDRA tool suite (source: LDRA)

Generative AI has fueled much confusion and uncertainty around AI/ML, even as manufacturers like Mercedes-Benz are using the technology. Taking a broader view of these technologies helps to cover the full range of possibilities for safety-critical automotive software.

In general, AI can be classified into two categories:

  • Narrow (or weak) AI fits the specific tasks it has been trained for but lacks general human-like intelligence.
  • General (or strong) AI performs a variety of functions and can teach itself how to solve new problems in ways that mimic human intelligence.

Automotive software teams have been using narrow AI for decades. Consider an airbag ECU or a digital instrument cluster, each of which is built to perform specific tasks and often falls under traditional hazard analysis and risk assessment processes. More recently, self-driving systems employ AI/ML to ingest sensor data and adapt to their environments for controlling functions like steering and braking.

AI/ML applications also extend into software development environments. Automated unit test vector generation is a form of narrow AI, as it simulates intelligent behavior by deriving test stubs and vectors from existing code. Figure 1 shows an example set of test vectors created automatically through software, eliminating the need for humans to spend time creating them.
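To make the idea concrete, here is a minimal sketch of boundary-value test vector generation. The function and parameter names are invented for illustration; commercial tools such as the LDRA tool suite apply far more sophisticated control-flow and data-flow analysis of the code under test.

```python
def generate_test_vectors(param_ranges):
    """For each parameter range (lo, hi), emit the classic boundary
    values: just outside, at, and just inside each bound, plus a midpoint."""
    vectors = {}
    for name, (lo, hi) in param_ranges.items():
        vectors[name] = [lo - 1, lo, lo + 1, (lo + hi) // 2, hi - 1, hi, hi + 1]
    return vectors

# Hypothetical example: a sensor input specified over the range 0..500.
vectors = generate_test_vectors({"decel": (0, 500)})
print(vectors["decel"])  # → [-1, 0, 1, 250, 499, 500, 501]
```

Even this naive scheme exercises the off-by-one errors that cluster around specification boundaries, which is why boundary-value vectors are a standard part of automatically generated unit tests.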

Audi worked with Intel and Nebbiolo on an AI-based proof-of-concept (POC) to improve quality control processes for the welds on its vehicles. The POC took place at Audi’s factory in Neckarsulm, Germany, which has 2,500 autonomous robots on its production line. Each robot uses tools, from glue guns to screwdrivers, to perform a specific vehicle assembly task. With up to 1,000 vehicles assembled per day and 5,000 welds per unit, Audi sought to improve quality control scale and efficiency.

Before the POC, Audi used the industry’s standard manual sampling method of pulling one car off the line daily to test welding spots and record their quality. The objective of the POC was to inspect all 5,000 welds for every car and infer the results of each weld within microseconds.

Figure 2. Architecture of Audi’s welding-gun quality analysis solution (Source: Intel)

Toward this end, developers created and trained an ML algorithm for accuracy by comparing its predictions with real inspection data. The software used data on electric voltage and current curves during the welding operation, configurations of the welds, types of metal, and the health of the electrodes. Models were deployed at the assembly line and cell levels, creating systems that could predict poor welds before they were performed, as illustrated in Figure 2 below.
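The training-and-comparison loop described above can be sketched in a few lines. All feature names, values, and the nearest-centroid model below are invented for illustration; Audi's production models are far richer than this.

```python
import random

random.seed(0)

def make_weld(good):
    """Synthesize hypothetical (mean_voltage, peak_current, electrode_wear)
    features for a good or poor weld."""
    if good:
        return (random.gauss(22, 1), random.gauss(8, 0.5), random.uniform(0.0, 0.3))
    return (random.gauss(18, 1), random.gauss(6, 0.5), random.uniform(0.6, 1.0))

# Labeled training data: 1 = good weld, 0 = poor weld.
train = [(make_weld(True), 1) for _ in range(50)] + \
        [(make_weld(False), 0) for _ in range(50)]

def centroid(label):
    """Average feature vector over all training rows with this label."""
    rows = [x for x, y in train if y == label]
    return [sum(col) / len(rows) for col in zip(*rows)]

centroids = {label: centroid(label) for label in (0, 1)}

def predict(features):
    """Classify a weld by its nearest class centroid (squared distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(centroids, key=lambda label: dist(centroids[label]))

print(predict((22.0, 8.0, 0.1)))  # → 1 (predicted good)
print(predict((18.0, 6.0, 0.8)))  # → 0 (predicted poor)
```

Accuracy is then assessed exactly as the article describes: by comparing the model's predictions against real inspection results for welds whose quality is already known.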

It is highly unlikely that the ML element of this control system was developed in accordance with a traditional functional safety standard, such as the V-model associated with IEC 61508: “Functional safety of electrical/electronic/programmable electronic safety-related systems.” The risks of erroneous outputs were likely minimal as there were known parameters (specifications of healthy electrodes, metal types, etc.) and expert technicians well-placed to mitigate any flawed detection of a faulty weld.

The automotive industry’s approach to incorporating AI/ML into standards is to define best practices for AI/ML applications that enable developers to fulfill the objectives of long-standing, higher-level standards such as ISO 26262. As shown in Figure 3, two upcoming standards illustrate this approach:

  • ISO/CD PAS 8800, Road Vehicles – Safety and artificial intelligence: This document will define safety-related properties and risk factors that can lead to insufficient performance and malfunctioning behavior of AI. It includes deriving safety requirements, considerations of data quality and completeness, and architectural measures for the control and mitigation of failures.
  • ISO/CD TS 5083, Road Vehicles – Safety for automated driving systems – Design, verification, and validation: This document will provide an overview and guidance on the steps for developing and validating an automated vehicle equipped with a safe automated driving system. It considers safety by design, cybersecurity, and verification and validation methods for automated driving focused on SAE level 3 and level 4 vehicles according to ISO/SAE PAS 22736.

Figure 3. Existing and proposed standards relating to safety and AI automotive (Source: LDRA)

Mitigating AI/ML risks in safety-critical applications

There are few, if any, safety-critical systems based entirely on AI/ML. More common is the integration of AI/ML-based elements within conventionally engineered systems, where the concepts of domain separation and “freedom from interference” can be applied.

ISO 26262 defines freedom from interference as the “absence of cascading failures between two or more elements that could lead to the violation of a safety requirement.” An “element” is a “system or part of a system including components, hardware, software, hardware parts, and software units”. “Cascading failure” is a “failure of an element of an item causing another element or elements of the same item to fail.” Constraining AI/ML algorithms to their own elements enables other elements to receive AI/ML-generated data with the proper handling to mitigate any risks associated with said data.

As illustrated below, traditional taint analysis tools can test and validate data flows coming from AI/ML elements. Developers can check incoming and outgoing data, based on data ranges and control flows, to determine the potential safety risks between connected elements. They can also include these tests within a larger strategy of demonstrating freedom from interference within a system.
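A simple way to picture such a check is a range-and-rate gate placed between an AI/ML element and its conventionally engineered consumer, so that out-of-specification data never propagates across the element boundary. The signal name and limits below are invented for illustration; this is a sketch of the principle, not production safety logic.

```python
STEERING_LIMIT_DEG = 30.0  # hypothetical actuator range limit
MAX_STEP_DEG = 2.0         # hypothetical per-cycle rate limit

def gate_steering_command(requested, previous):
    """Clamp an ML-derived steering request to its specified range and
    per-cycle rate limit before it reaches the conventional element."""
    # Range check: never exceed the actuator's specified limits.
    clamped = max(-STEERING_LIMIT_DEG, min(STEERING_LIMIT_DEG, requested))
    # Rate check: never change faster than the specified step per cycle.
    step = max(-MAX_STEP_DEG, min(MAX_STEP_DEG, clamped - previous))
    return previous + step

print(gate_steering_command(45.0, 0.0))  # → 2.0 (range- and rate-limited)
print(gate_steering_command(-1.0, 0.0))  # → -1.0 (within limits, passed through)
```

Verifying that every AI/ML output passes through a gate of this kind, and that the gate's ranges trace back to safety requirements, is the sort of data-flow property that taint analysis tools are well suited to demonstrate.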


Figure 4. Taint analysis using the LDRA tool suite. (Source: LDRA)

From optimizing manufacturing processes to controlling driver functions, AI/ML-based systems are the new frontier for automotive manufacturers. Despite the grand promise of AI/ML acting with human intelligence, automotive software teams must still apply functional safety processes and tools to minimize risks.

For the foreseeable future, AI/ML components will operate inside conventionally developed software and rely on having humans in the loop. New safety standards are being developed to guide teams on the future of these technologies, and developers must know how to combine them with traditional verification techniques to achieve their goals.
