A Developer’s View of ISO 26262

Technology News | By eeNews Europe

The car in which Mary Ward was travelling in 1869 is not reported to have had software-controlled systems, but today an often-quoted comparison states that a modern airliner contains about 7 million lines of software code, whereas a modern car contains 20 million. In the past, much of a car’s software was related to non-safety-critical applications such as infotainment, but increasingly, ADAS systems and cars with semi-autonomous capabilities are making use of software in applications that directly affect safety.

ISO published the ISO 26262 standard in 2011/2012. This standard recommends tools, techniques, and methodologies for developing such systems and affects many departments within an organisation producing software for cars. This article provides an introduction to the standard from the point of view of the system designer and implementer and is based on QNX Software Systems’ recent experience certifying its operating system to ISO 26262.

ISO 26262 at a glance

There is an old joke about someone asking the way to a destination and being told “well, if I wanted to go there, I wouldn’t start from here”. This quandary also applies to ISO 26262, which is based on the IEC 61508 standard, Functional Safety of Electrical/Electronic/Programmable Electronic Safety-related Systems. The linkage between the two standards is beyond the scope of this article, but any reader who wants a deep understanding of ISO 26262 should first study IEC 61508.

ISO 26262 applies the techniques of IEC 61508 to electrical and electronic systems that provide functional safety in production passenger cars lighter than 3500 kg. It does not apply to trucks, buses, special-purpose vehicles, or cars adapted in some way (e.g., for disabled drivers).

“Functional safety” is a key concept: safety can be provided in a system in several ways, and functional safety describes an architecture where the safety component has to continue functioning to maintain the overall safety of the system. The component may have to function continuously or only on demand, but the safety of the system relies on it functioning when required. So a headrest that prevents whiplash injury certainly provides safety, but not functional safety.

ISO 26262 itself comes in 10 parts that cover the various phases of development: concept, system-level, hardware development, software development, and production.

Automotive Safety Integrity Levels

No system is completely safe: 100% safety is not assured even if the car never moves. ISO 26262 recognises this by associating an Automotive Safety Integrity Level (ASIL) with each system and recommending more stringent procedures for the higher ASILs.

The key analysis for any safety-critical development is the Hazard and Risk Analysis; it determines the safety requirements—the mitigations identified against each risk—and the residual risks. The residual risks will then determine the ASIL. Part 3 of ISO 26262 outlines some ways in which hazards and risks may be determined, but does not refer to IEC 61882, Hazard and operability studies, even though IEC 61508 does so.

The ASIL has a value of A (lowest) to D (highest) and is determined by combining three different characteristics of potentially hazardous events, as defined in part 3 of ISO 26262:

  1. What harm could occur were the system to fail? The list of possibilities given in ISO 26262 requires a medical degree for full understanding: e.g., “spinal fractures below the fourth cervical vertebra with damage to the spinal cord, intestinal tears, cardiac tears, more than 12 hours of unconsciousness including intracranial bleeding”. The outcome of this medical analysis is a value ranging from S0 (no injuries) to S3 (life-threatening injuries).
  2. What is the likelihood of meeting the condition in which the device is required to operate? This results in a value between E0 (incredible) and E4 (high probability). Thus, a device to protect the occupants of the car against meteor strikes might be classified E0, while a device to assist with braking on a wet road might be classified as E3 (medium probability: 1% to 10% of average driving time).
  3. What is the probability that the pedestrian or driver, depending on who is at risk, could control the event? This gives a value ranging from C0 (controllable in general) to C3 (difficult to control or uncontrollable).

Once these three values are determined, the ASIL follows directly from a table in ISO 26262. For example, a system classified as S2 (severe or life-threatening injuries where survival is probable), E4 (high probability) and C2 (normally controllable) is ASIL B.

Note that for a whole range of classifications (e.g., S3, E1, C2), no ASIL is defined and ISO 26262 is not applicable. Also, only one combination, S3/E4/C3, results in ASIL D.
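The classification table has a convenient regularity that is often noted, though the shorthand below is not wording from the standard itself: when none of the three indices is zero, the sum of the S, E, and C indices maps directly to the ASIL (7 gives A, 8 gives B, 9 gives C, 10 gives D), and any lower sum falls outside the scope of ISO 26262 (quality management, QM). A minimal sketch:

```python
def asil(s: int, e: int, c: int) -> str:
    """Map severity (S0-S3), exposure (E0-E4) and controllability (C0-C3)
    classes to an ASIL, using the shorthand that the table in part 3 of
    ISO 26262 assigns A/B/C/D when the indices sum to 7/8/9/10."""
    if not (0 <= s <= 3 and 0 <= e <= 4 and 0 <= c <= 3):
        raise ValueError("invalid S/E/C classification")
    if 0 in (s, e, c):          # any zero class means no ASIL applies
        return "QM"
    return {7: "A", 8: "B", 9: "C", 10: "D"}.get(s + e + c, "QM")

print(asil(2, 4, 2))   # the S2/E4/C2 example in the text -> B
print(asil(3, 4, 3))   # the only combination giving ASIL D -> D
print(asil(3, 1, 2))   # S3/E1/C2 -> QM (no ASIL defined)
```

Note that only the S3/E4/C3 row reaches a sum of 10, which is why ASIL D arises from exactly one combination.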

Part 9 of ISO 26262 is dedicated to design patterns for decomposing systems into subsystems and assigning the ASIL for each subsystem.

Design and Implementation

Once the ASIL for the system has been determined from the Hazard and Risk Analysis, the designer has to select an appropriate architecture and the appropriate tools to validate that architecture.

To assist in this process, part 6 of ISO 26262 provides tables of tools and techniques and recommends or highly recommends them at each ASIL. Unlike IEC 61508, ISO 26262 does not refer to papers that describe the recommended techniques, so the designer is left unguided as to the precise meaning of some terms, such as “appropriate scheduling properties”. When taking a product through ISO 26262 certification, the designer should either refer to the equivalent section of IEC 61508 or come to a clear understanding with the auditor on the precise meaning of each term early in the design process.

At this point we need to distinguish between the two reasons why the designer may wish to apply a particular tool or technique. The first is to convince herself or himself (and company management) that the system being developed is safe. Shipping an unsafe system can, of course, inflict immense damage on humans, on the environment, and on a company’s reputation. No professional designer would allow such a system to be delivered unless she or he is confident that it meets its requirements.

The analysis demonstrating that the system does indeed meet its safety requirements is contained in the Safety Case; see below. The tables in part 6 of ISO 26262 can provide a useful check-list to confirm that important things have not been overlooked.

Secondly, the designer may wish to apply a particular tool or technique to convince an auditor that the requirements of ISO 26262 have been met. It is here that the tables in part 6 become the law rather than useful guidance. As part 6 states: “If methods other than those listed are to be applied, a rationale shall be given that these fulfil the corresponding requirement.” Thus, if any recommended technique is not applied, the designer has to justify both the omission and the technique being used instead. Given that there are 16 tables, each with multiple entries, this justification can entail a significant amount of work.

Here are a few of the techniques highly recommended for an ASIL C application.

  • The use of defensive implementation techniques. This is an example of an open-ended recommendation where some form of justification will probably be needed for any system. For example, is it adequate to check some of the parameters passed to externally facing functions or is it necessary to check the array bound limits on every array access?
  • The use of semi-formal notations for the architectural design. Strangely, the use of formal (purely mathematical) notations that provide a greater degree of confidence is recommended, but not highly recommended.
  • The use of control- and data-flow analysis to verify the correctness of the design. Again, formal (mathematical) verification, while recommended, is not highly recommended at either ASIL C or D.
  • The use of control-flow monitoring for error and anomaly detection at runtime.
  • A limit of one entry and one exit point in subprograms and functions and no unconditional jumps. If, for reasons of expediency or efficiency, these recommendations are broken, such exception will need to be identified and justified.
  • Advanced static analysis tools such as Coccinelle may be needed to identify all the places where these recommendations have not been followed. Of course, that raises the problem of justifying that Coccinelle has found all the occurrences; see the “Tools” section below.
  • The use of back-to-back comparison testing between model and code to perform module testing.
  • The identification of equivalence classes to produce module test cases. This is one of the several mechanisms that ISO 26262 recommends for generating modules tests; the purpose is to avoid ad hoc methods and to make test case generation somewhat methodical. ISO 26262 expects 100% branch coverage from the module testing and recommends, but does not highly recommend, MC/DC coverage.
  • The use of requirements-based testing during integration. Interestingly, ISO 26262 does not recommend risk-based testing, unlike the more recent standard ISO/IEC/IEEE 29119, Software and systems engineering — Software testing, which is based on that approach. Because ISO/IEC/IEEE 29119 is more recent, it might be acceptable to argue that requirements-based testing has not been carried out because it has been superseded by risk-based testing, as justified in that standard — another topic to discuss early in the project with the auditor.
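The equivalence-class mechanism mentioned above can be made concrete with a small sketch. The function under test and its limits are invented for illustration; the point is that one representative value per class, plus the boundaries, replaces ad hoc test selection:

```python
# Hypothetical module under test: a plausibility check on a wheel-speed
# reading (the function and its range limits are invented for illustration).
def wheel_speed_valid(speed_kmh: float) -> bool:
    return 0.0 <= speed_kmh <= 300.0

# Equivalence classes for the input domain: one representative value per
# class makes test-case generation methodical rather than ad hoc.
equivalence_classes = {
    "negative (invalid)":    (-10.0, False),
    "lower boundary":        (0.0, True),
    "nominal (valid)":       (120.0, True),
    "upper boundary":        (300.0, True),
    "above range (invalid)": (350.0, False),
}

for name, (value, expected) in equivalence_classes.items():
    assert wheel_speed_valid(value) == expected, name
    print(f"{name}: {value} km/h -> {wheel_speed_valid(value)}")
```

Note that these five cases also exercise both branches of the check, which is in keeping with the 100% branch coverage that ISO 26262 expects from module testing.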

Tools

Many of the techniques highlighted in the previous section rely on tools for their application. For example, ISO 26262 highly recommends static code analysis at ASILs B to D, which raises the question of how much we can rely on the static code analysis tool we decide to use.

For this reason, ISO 26262 requires a Tool Confidence Level (TCL) to be assigned to every tool: editors, compilers, linkers, modelling tools, etc. According to part 8 of ISO 26262, a TCL comprises two parts:

  1. The impact that an erroneous tool could have on the final product. An erroneous compiler, for example, could have a significant impact on the final product were it to produce bad code silently, whereas a module test generator that did not produce all the module tests might give the developer a false sense of security but would not directly influence the software in the final system. A Tool Impact (TI) level of TI1 (no possible impact on the final product) or TI2 (possible impact) must be allocated: in this case the compiler would be allocated TI2 and the module test generator TI1.
  2. The probability that the error in the tool would be noticed. This results in a Tool Detection (TD) value of TD1 (problem would almost certainly be detected), TD2 (problem would probably be detected), or TD3 (problem would probably not be detected).

The TI and TD values are combined into a TCL value of 1, 2, or 3; this value determines the level of verification required.
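The combination follows a small table in part 8: a tool that can have no impact on the final product (TI1) is always TCL1, and otherwise the TCL tracks how likely a tool error is to go unnoticed. A sketch of that lookup:

```python
def tool_confidence_level(ti: int, td: int) -> int:
    """Combine Tool Impact (TI1/TI2) and Tool error Detection (TD1-TD3)
    into a Tool Confidence Level, following the table in part 8 of
    ISO 26262: a tool that cannot affect the product (TI1) is always
    TCL1; otherwise the TCL rises as detection becomes less likely."""
    if ti not in (1, 2) or td not in (1, 2, 3):
        raise ValueError("TI must be 1 or 2 and TD must be 1, 2 or 3")
    if ti == 1:
        return 1            # no impact on the final product
    return td               # TI2: TD1 -> TCL1, TD2 -> TCL2, TD3 -> TCL3

print(tool_confidence_level(1, 3))  # harmless-even-if-wrong tool -> 1
print(tool_confidence_level(2, 3))  # e.g. a compiler silently emitting bad code -> 3
```

The higher the resulting TCL, the more qualification evidence (validation, certification, or confidence from use) the standard expects for the tool.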

Identifying and classifying all the tools involved in the chain from designer’s brain to shipped product can be a significant job; in the certification of QNX Software Systems’ operating system, twenty-one tools were identified. Once the tools have been classified, the development team must produce evidence to justify their use — an onerous but necessary task.

Some tools, such as static code analysis programs and module test case execution systems, can be bought pre-certified for use in ISO 26262 systems up to a particular ASIL. This approach reduces the amount of work required by the user of the tool, even though the user still bears the responsibility for the tool’s correct operation. Nonetheless, a tool certification is a good piece of evidence to demonstrate to an auditor that the tool is to be trusted.

The inability to justify the integrity of a tool may lead to project-level decisions. It is, for example, difficult to see how the claim of the integrity of a C++ compiler could be demonstrated unless the coding standards defined for use by the development team severely limit the C++ constructs that developers could use. This difficulty might lead to a decision not to use C++ in a particular project.

The Safety Case

As mentioned above, the designer needs to be convinced that the design is sufficiently safe, and must present this evidence to an auditor if the system is to receive ISO 26262 certification. The document that captures this argument is the Safety Case. Part 2 of ISO 26262 describes the need for a Safety Case and part 10 describes its three main components:

  1. A clear statement of what is claimed about the system. This normally takes the form: “We claim that this system, when used in the manner defined in the Safety Manual, will do X at least Y% of the time when faced with any of the following conditions: A, B, …”. This statement is essential because no system will be safe under all conditions of environment and use and the claim must make it clear under what conditions the system is expected to behave safely.
  2. The argument that the claim has been met. Various notations can represent this argument. For its operating system certification, QNX Software Systems used the Goal Structuring Notation (GSN) and a Bayesian Network. This is an example of the bifurcation discussed above: the Bayesian Network was needed to convince the designer, but the auditor preferred the GSN, so this was added.
  3. The evidence that supports the argument. Initially the argument will have no associated evidence, and it is useful to agree upon the argument structure with the auditor before evidence is gathered: “If we were to produce the evidence supporting this argument, would the argument convince you?” This approach allows the auditor to become familiar with the argument that will eventually be presented and to point out weaknesses in it before the formal audit.

Both GSN and Bayesian representations are tree-based and each leaf resolves to a piece of evidence: a document, a test result, a formal analysis, etc. Listing these against each subclaim in the argument makes it easier to check that all the necessary evidence has been identified and speeds up the audit: “You agreed the structure of the argument previously, so all we need to do is check that each piece of evidence really does support the associated subclaim.”
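This leaf-by-leaf bookkeeping is easy to mechanise. The sketch below uses an invented, much-simplified argument tree (the claims and evidence names are illustrative only, not drawn from any real Safety Case) and flags subclaims that still lack supporting evidence:

```python
# A GSN-style argument tree as nested dicts; each leaf subclaim should
# resolve to at least one piece of evidence (document, test result, etc.).
safety_argument = {
    "claim": "OS meets its stated safety requirements",
    "subclaims": [
        {"claim": "Memory partitioning is enforced",
         "evidence": ["partition test report", "design review minutes"]},
        {"claim": "Scheduler meets its timing guarantees",
         "evidence": ["worst-case latency measurements"]},
        {"claim": "Error-handling paths are exercised",
         "evidence": []},        # no evidence gathered yet
    ],
}

def unsupported_subclaims(node: dict) -> list:
    """Return the leaf subclaims that have no associated evidence."""
    missing = []
    for sub in node.get("subclaims", []):
        if sub.get("subclaims"):            # interior node: recurse
            missing.extend(unsupported_subclaims(sub))
        elif not sub.get("evidence"):       # leaf with no evidence
            missing.append(sub["claim"])
    return missing

print(unsupported_subclaims(safety_argument))
# -> ['Error-handling paths are exercised']
```

Running such a check regularly during development makes the eventual audit a matter of confirming that each listed piece of evidence really does support its subclaim.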

Figure 1 — A GSN diagram: The structure of the argument in the Safety Case.

Confidence from Use

ISO 26262 uses the term “proven in use” for an argument of the form: “We have had X of these devices in the field for Y hours and only had Z failures, and we therefore claim that the device meets the requirements of ASIL B.” As no proof is involved, I prefer the term “confidence from use” or “confidence through use” and this is the term that ISO 26262 uses for tools.

Part 8 of ISO 26262 provides some standard statistical formulæ to quantify the number of hours of use needed to support a confidence from use argument:

    t ≥ (θ / 2) × χ²(1 − CL; 2f + 2)

where t is the necessary number of hours of operation, CL is the required confidence level (ISO 26262 suggests 0.7), θ is the mean time to failure that is to be demonstrated, f is the number of safety-impacting incidents, and χ²(a; n) is the value of the chi-squared distribution with error probability a and n degrees of freedom. For example, for an ASIL C system, the requirement is that t > 1.2 x 10^8 hours. Note that 1.2 x 10^8 hours is approximately 13,700 years.
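The ASIL C figure can be reproduced with a short calculation. The sketch below assumes the usual one-sided chi-squared bound for a constant failure rate; because the degrees of freedom (2f + 2) are always even, the upper tail reduces to a Poisson sum and plain bisection suffices, so no statistics library is needed:

```python
import math

def chi2_upper_quantile(a: float, dof: int) -> float:
    """x such that P(X > x) = a for a chi-squared variable with an even
    number of degrees of freedom: the upper tail is then a Poisson
    partial sum, so bisection on a closed form is enough."""
    k = dof // 2
    def upper_tail(x: float) -> float:
        return sum(math.exp(-x / 2) * (x / 2) ** i / math.factorial(i)
                   for i in range(k))
    lo, hi = 0.0, 1000.0
    for _ in range(100):                 # bisection to ample precision
        mid = (lo + hi) / 2
        if upper_tail(mid) > a:          # tail still too heavy: x too small
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def hours_needed(target_mttf: float, cl: float, failures: int) -> float:
    """Field-experience hours needed to claim the target mean time to
    failure at confidence level cl, given the observed failure count."""
    return target_mttf / 2 * chi2_upper_quantile(1 - cl, 2 * failures + 2)

# Example: CL = 0.7, no observed failures, target MTTF of 1e8 hours.
print(f"{hours_needed(1e8, 0.7, 0):.3g}")   # about 1.2e8 hours
```

With zero failures the quantile is simply −2 ln(1 − CL) ≈ 2.41, which multiplied by half the target MTTF gives the 1.2 x 10^8 hours quoted for the ASIL C example.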


A Structured Approach

ISO 26262 certainly adds some overhead to the development process that is normally applied to software on which system safety depends. In particular, it requires a more structured approach than would perhaps otherwise be applied.

The standard mandates three fundamental documents on which all the others are built:

1. The Hazard and Risk Analysis that identifies the safety requirements and residual risks of the system. This must be created early and updated as the project progresses.

2. The Safety Manual that accompanies the product to the field and sets the constraints within which the product must be deployed.

3. The Safety Case that presents the claim made about the product and the argument for believing that the claim is justified. This is built throughout the project development; it would be very difficult to recreate the evidence retrospectively.

The development team must also adopt the tools and techniques (highly) recommended within ISO 26262 or present a good argument as to why they have not been adopted. That said, any team working on a safety-critical development project would probably apply most of the recommended techniques, even without the stimulus of ISO 26262.

It would be good to think that, once ISO 26262 systems are ubiquitous in cars, there will be no more Mary Wards. In reality, no specification is perfect, no design is perfect, no implementation is perfect, no audit is perfect, and no system can ever be 100% safe. But the application of the techniques described in ISO 26262 can move most development projects in the direction of increased safety.

Some have criticised ISO 26262 because, if it is mis-applied, it can become a checklist rather than a source of useful techniques whose applicability needs to be assessed for each project. Effectively a system can be designed to meet ISO 26262 rather than to be safe. But it is our role as professional designers to avoid this pitfall.

About the author:

Chris Hobbs is Senior Developer, Safe Systems at QNX Software Systems.
