
Apply Standard Guidelines for Automotive Safety and Security
A software problem in the Autopilot of an electric Tesla Model S has resulted in the first fatality attributed to a driverless vehicle. Apparently the public beta version of the technology failed to distinguish between the white trailer of an oncoming semi turning left across the road and the open space beyond it. The car continued on course, and the driver was killed instantly. Like nothing else, this tragic incident shows how critical software has become in the modern automobile.
While the driverless car is obviously not yet perfected, today’s automobiles contain vast amounts of software, much of which is already critical to safety and all of which must work reliably. Software now controls everything from engine management, to driver alerts for lane keeping, passing, and safe following distance, to opening and closing the windows, to the entertainment system.
Software also controls outside communication, such as Bluetooth for hands-free telephone use and connections to the Internet. Any Internet connection raises serious security concerns, especially when it can grant access to vital systems in a moving car. For instance, in 2012, the GM OnStar system built into its vehicles was used at the request of police to shut down a stolen car as it was being pursued. If the police can break into a car, so can thieves and other hackers. To assure safety and reliability, it has become clear that automotive code must also be secure from outside tampering.
Of course, the automobile industry is huge, with many manufacturers of vehicles and of the components and subsystems built into them, many of which are supplied by third-party developers. As subsystems that formerly contained no software—such as the controls for rolling the windows up and down—come to be controlled by embedded code, a new paradox has arisen. These third-party and peripheral devices must follow standards for a common interface so that they can communicate within the vehicle. At the same time, they must be secure from outside tampering. This requires a comprehensive set of development tools that can establish a discipline for cooperating teams from the very start and apply to all phases of development. Reliability, safety, and security are interdependent and cannot be afterthoughts; they must be the foundation as well as the structure of the product.
A Framework of Standards and Guidelines
Fortunately, the automotive industry has access to extensive software standards. One of these is ISO 26262, which provides detailed industry-specific guidelines for the production of software for automotive systems and equipment, whether or not it is safety critical. The standard provides a risk-management approach that includes the determination of risk classes, or Automotive Safety Integrity Levels (ASILs). ISO 26262 establishes the use of development processes, coding standards, and tools to improve software quality. MISRA C, a coding standard developed by the Motor Industry Software Reliability Association, is ideal for this purpose. MISRA C defines a subset of the C language suitable for developing any application with high-integrity or high-reliability requirements. The standard has been updated over the years to reflect new requirements, most recently with MISRA C:2012 Amendment 1, which contains 14 new guidelines for implementing security. The amendment is an enhancement to, and is fully compatible with, the existing MISRA C guidelines, and its approach will carry forward into future editions. (See sidebar: “MISRA Software Guidelines Extended to Improve Security.”)
Together, ISO 26262 and MISRA form a basis for applying a comprehensive set of tools and processes for the development of reliable, safe, and secure software systems in the automotive industry. A “V” diagram of how such a set could be applied is shown in Figure 1 and is, in fact, reflective of the process outlined by ISO 26262 for ensuring reliability and safety.
Implementing secure code can follow a number of approaches, such as following the cybersecurity guidelines published by the National Institute of Standards and Technology (NIST), selecting a real-time operating system that incorporates secure protocols, creating layers of protection with passwords and physical identification, and more. Incorporating the MISRA C security guidelines provides additional guards against coding errors and innocent mistakes that can compromise security.
The heart of developing secure code is to devise a strategy, based on security guidelines and the coding standard along with an organization’s own methods, and to build it into the requirements document of the system under development. The next big step, of course, is to make sure that this strategy has actually been carried out effectively. Due to the size and complexity of today’s software, this can no longer be done manually; it requires a comprehensive set of tools that can thoroughly analyze the code both before and after compilation and throughout the course of the project.
Use Traceability and Analysis to Verify Safety and Security
A requirements management tool lets teams work on individual activities and link code and verification artifacts back to higher-level objectives. Three major elements, all under the supervision of bi-directional requirements traceability, are applied early on and at successive stages of the development process: static analysis, dynamic analysis for functional test and structural coverage analysis, and unit/integration testing. The last of these applies both static and dynamic analysis to individual units early in the development process and to the integrated code later on (Figure 2).

Static and Dynamic Analysis Partner for Security
In assuring security, the two main concerns are data and control. Questions that must be considered include: “Who has access to what data? Who can read from it and who can write to it? What is the flow of data to and from what entities and how does access affect control?” Here the “who” can refer to persons like developers and operators—and hackers—and it can also refer to software components either within the application or residing somewhere in a networked architecture. To address these issues, static and dynamic analysis must go hand in hand.
On the static analysis side, the tools work with the uncompiled source code, checking it against the selected rules, which can be any combination of the supported standards plus any custom rules and requirements that the developer or the company may specify. The tools also look for software constructs that can compromise security, check memory protection to determine who has access to which memory, and trace pointers that may stray outside their intended memory regions. Static analysis can also be used to verify that the code complies with the MISRA C standard. This is done automatically by running the analysis tool, loaded with the coding standard definition, against the application’s source code. Results should ideally be presented in graphical screen displays so that compliance with the coding standard can be assessed easily (Figure 3).
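As a simple illustration of the kinds of constructs such tools flag, consider the following hypothetical fragment. It is not drawn from any real project; the names calibrationTable, readCalibration, and clearCalibration are invented for the example. It contains an array index that arrives unvalidated from a caller and a loop that walks a pointer past the end of the array it addresses, both of which a static analysis tool checking MISRA C compliance would report.
#include <stdint.h>
static uint8_t calibrationTable[ 8 ];
uint8_t readCalibration ( uint8_t index )
{
    /* Flagged: 'index' is never range-checked, so the read may fall
     * outside 'calibrationTable'.
     */
    return calibrationTable[ index ];
}
void clearCalibration ( void )
{
    uint8_t *p = calibrationTable;
    /* Flagged: the loop runs nine times, so the final write through 'p'
     * lands one element past the end of 'calibrationTable'.
     */
    for ( uint8_t i = 0u; i <= 8u; ++i )
    {
        *p = 0u;
        ++p;
    }
}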
It is also important to perform this analysis on any imported “software of unknown pedigree” (SOUP) code that may be brought in from earlier projects or acquired as open source. Such code will almost certainly need to be modified to conform to the latest security requirements of MISRA C.
Dynamic analysis, on the other hand, exercises the compiled code, which is linked back to the source code. Dynamic analysis, especially code coverage analysis, requires extensive testing. Developers often generate and manage their own test cases manually, typically working from the requirements document, and they may stimulate and monitor sections of the application with varying degrees of effectiveness. Given the size and complexity of today’s code, however, that will not be enough to achieve confidence that the code is correct or to obtain any certifications that may be required.
Automatic test generation is based on the static analysis of the code: the information that static analysis provides helps the automatic test generator create the proper stimuli for the software components in the application during dynamic analysis. Functional tests can be created manually from the requirements document. These should include any functional security tests, such as simulated attempts to gain control of a device or to feed it incorrect data that would change its mission. Functional testing should also cover robustness, such as checking the results of disallowed inputs and anomalous conditions. Dynamic analysis also provides code coverage and data flow/control analysis, which in turn can be checked for completeness using bi-directional requirements traceability.
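A minimal sketch of such a host-based functional and robustness test is shown below. The function under test, setSpeedLimit(), its valid range of 0 to 200 km/h, and the policy of rejecting anything outside that range are all assumptions made for illustration; a real test would be derived from the project’s requirements document and managed by the test tools.
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
/* Hypothetical function under test: stores a speed limit in km/h and is
 * required to reject values outside 0..200.
 */
static uint16_t currentLimit = 0u;
static bool setSpeedLimit ( uint16_t kmPerHour )
{
    bool accepted = false;
    if ( kmPerHour <= 200u )
    {
        currentLimit = kmPerHour;
        accepted = true;
    }
    return accepted;
}
int main ( void )
{
    /* Functional cases: values inside the valid range must be accepted. */
    assert ( setSpeedLimit ( 0u ) );
    assert ( setSpeedLimit ( 200u ) );
    /* Robustness cases: out-of-range values, such as an attacker or a faulty
     * component might supply, must be rejected and must not change the
     * stored limit.
     */
    assert ( !setSpeedLimit ( 201u ) );
    assert ( !setSpeedLimit ( 0xFFFFu ) );
    assert ( currentLimit == 200u );
    return 0;
}
Each assertion corresponds to a requirement, so a traceability tool can link the test results back to the functional and security requirements they verify.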
In addition to testing for compliance with standards and requirements, it is also necessary to examine any segments of SOUP code. For example, there is danger associated with areas of “dead” code that could be activated by a hacker, or by some obscure event in the system, for malicious purposes. Although it is ideal to implement security from the ground up, most projects include pre-existing code, or SOUP, that may appear to have just the needed functionality. Such code—even code from the same organization—must be subjected to exactly the same rigorous analysis used in the current project. Even if it is correctly written, such SOUP may contain segments that are simply not needed by the application under development. Used together, static and dynamic analysis can reveal areas of dead code, which can be a source of danger or may just inconveniently take up space. It is necessary to identify such code and deal with it—usually by eliminating it.
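The following hypothetical fragment illustrates the kind of dead code that this combination of analyses uncovers. The factory-test flag and the braking-limit logic are invented for the example; the point is that nothing in the application ever sets the flag, so coverage analysis reports the branch as never executed, and it should normally be removed rather than left as a potential entry point.
#include <stdbool.h>
#include <stdint.h>
/* Hypothetical SOUP fragment, for illustration only. */
static bool factoryMode = false;   /* never set true anywhere in this application */
static uint8_t brakeLimit = 100u;
uint8_t limitBrakeDemand ( uint8_t demand )
{
    uint8_t result = demand;
    if ( factoryMode )
    {
        /* Dead branch: removes the safety limit. Coverage analysis shows it
         * is never executed, yet a hacker or an obscure event that alters
         * 'factoryMode' could activate it.
         */
        brakeLimit = 255u;
    }
    if ( result > brakeLimit )
    {
        result = brakeLimit;
    }
    return result;
}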
Start with Unit Testing and Apply It at All Stages of the Project
As units are developed, tested, modified, and retested by teams of developers—possibly in entirely different locations—the test records generated by a comprehensive tool suite can be stored, shared, and reused as units are integrated into the larger project. This is important from the start, because thinking about and developing for security from the ground up doesn’t help much unless you can also test from the ground up—and that includes testing on a host development system before target hardware is available. At that stage the project is nowhere near completion, so it must be possible to do early unit testing and then integration testing as assignments come together from the different teams of developers.
Unit testing is needed for the reasons already mentioned in case any of the code is SOUP, but there may also be less sinister reasons. For example, it may be necessary to rename certain variables to comply with the conventions used in the larger project. All such changes are possible avenues for introducing errors. The decision to use unit test tools ultimately comes down to a commercial one: the later a defect is found in product development, the more costly it is to fix.
Applying a comprehensive suite of test and analysis tools to the development process of an organization greatly improves the thoroughness and accuracy of security measures to protect vital systems. It also smooths what can often be a painful effort for teams to work together for common goals and to have confidence in their final product. That product will also stand a much better chance for customer approval and, if needed, certification by authorities.
MISRA Software Guidelines Extended to Improve Security
The MISRA guidelines aim to facilitate code safety, security, portability, and reliability in the context of embedded systems. Recently, the guidelines were extended with Amendment 1, which establishes 14 new guidelines for secure C coding to improve coverage of the security concerns highlighted by the recently published ISO/IEC TS 17961:2013 C secure coding rules. The amendment comprises both broad directives and specific rules. By following these additional guidelines, developers can analyze their code more thoroughly and can assure regulatory authorities that they followed safe and secure coding practices.
The following are two examples from the amendment:
Example 1
The validity of values received from external sources shall be checked. (Dir. 4.14)
This requires that data received from external sources be validated before it is used. It was neglect of this principle that caused the infamous “Heartbleed” vulnerability, in which a seemingly minor error, a failure to verify the length of a specific piece of data, allowed hackers access via the “heartbeat” of the SSL exchange that takes place prior to encryption and normal data exchange.
The following sample code is dangerous because the length of a message received from an external source is not validated. Taking this to the logical extreme—by not validating the length of data to send back—an attacker could extract the entire contents of a computer’s memory remotely. In practice, variations of this attack have been used to extract cleartext passwords from network servers’ internal memory buffers. Recently, it was used to download large volumes of customer financial information, but it could be used to compromise almost any type of system.
Exploits such as this make it possible to retrieve the encryption keys and passwords that allow different parts of the vehicle to communicate with one another. In autonomous vehicles the systems are often interconnected, and by compromising just one, for instance a sensor or the GPS, an attacker can compromise the whole vehicle: literally telling it that it is someplace it is not, and thus triggering logic that should not be triggered under normal operation. The potential for a fatality in these kinds of scenarios is great.
extern uint8_t buffer[ 16 ];
/* pMessage points to an external message that is to be copied to 'buffer'.
 * The first byte holds the length of the message.
 */
void processMessage ( uint8_t const *pMessage )
{
    uint8_t length = *pMessage; /* Length not validated */
    for ( uint8_t i = 0u; i < length; ++i )
    {
        ++pMessage;
        buffer[ i ] = *pMessage;
    }
}
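One possible correction is sketched below: the length byte is checked against the size of the destination buffer before anything is copied. The policy of silently ignoring an over-long message is an assumption made for illustration; a real design would handle the error as its requirements dictate, for example by rejecting the message and raising a fault.
extern uint8_t buffer[ 16 ];
/* pMessage points to an external message that is to be copied to 'buffer'.
 * The first byte holds the length of the message.
 */
void processMessage ( uint8_t const *pMessage )
{
    uint8_t length = *pMessage;
    if ( length <= ( uint8_t ) sizeof ( buffer ) )   /* Length validated */
    {
        for ( uint8_t i = 0u; i < length; ++i )
        {
            ++pMessage;
            buffer[ i ] = *pMessage;
        }
    }
}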
Example 2
The Standard Library function memcmp shall not be used to compare null terminated strings. (Rule 21.14)
The memcmp function is designed for comparing blocks of memory, not for comparing strings such as passwords. Because it returns more than just true or false, it can be used to reveal passwords in database systems. This low-level library function is used in almost all data-retrieval functionality, and the database does not need to be a formal database system: the rule applies equally to the informal stores that hold, for instance, different attributes of an automobile, such as how bright the headlights are or whether the left or the right tire has greater inflation. Once these passwords are compromised, the data can potentially be changed, and once any part of the vehicle is compromised, the rest of its systems cannot be trusted, because the vehicle itself is an interconnected system.
The following sample code uses memcmp to compare strings when strcmp should have been used.
extern char buffer1[ 12 ];
extern char buffer2[ 12 ];
void f1 ( void )
{
    strcpy ( buffer1, "abc" );
    strcpy ( buffer2, "abc" );
    if ( memcmp ( buffer1, buffer2, sizeof ( buffer1 ) ) != 0 )
    {
        /* The strings stored in buffer1 and buffer2 are reported to be
         * different, but this may actually be due to differences in the
         * uninitialised characters stored after the null terminators.
         */
    }
}
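For comparison, a corrected sketch using strcmp is shown below; strcmp stops at the null terminators, so the uninitialised bytes that follow them cannot affect the result.
#include <string.h>
extern char buffer1[ 12 ];
extern char buffer2[ 12 ];
void f1 ( void )
{
    strcpy ( buffer1, "abc" );
    strcpy ( buffer2, "abc" );
    if ( strcmp ( buffer1, buffer2 ) != 0 )
    {
        /* Not reached: only the characters up to the null terminators are
         * compared, and the two strings are identical.
         */
    }
}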
The MISRA guidelines are written so that compliance with them can be checked using static source code analysis.
About the author:
Jay Thomas, Director of Field Engineering for LDRA Technology in San Bruno, California, has worked on embedded controls simulation, processor simulation, mission- and safety-critical flight software, and communications applications in the aerospace industry. His focus on embedded verification implementation ensures that LDRA clients in aerospace, medical, and industrial sectors are well grounded in safety-, mission-, and security-critical processes.
