
Virtualization — A FACE lift for vehicle control
Almost everyone asked will regard 2021 as a disruptive year. The reality is that for the automotive industry, disruption has been going on for some time, the primary shifts being:
- A shift from human control towards autonomous platforms
- A shift from petrol- and diesel-driven engines to electric
- A shift from value being added in hardware to value being added through software
In addition, there is all manner of disruption in the supply chain, from political influences such as the tensions between China and America to the increased rate of acquisitions across the software and hardware industries. It is hard to predict which of the myriad semiconductor startups focused on silicon for artificial intelligence will survive, let alone thrive.
In the face of this highly creative but chaotic environment, how is the automotive industry to retain control of its value chain? Some companies, such as Tesla, have started to create their own system-on-chip components in a bid to exercise increased control over their value chains, and I expect a few others to follow suit. Not everyone can afford the investment. In this article, I will look at the lessons the automotive industry can learn from developers of military systems, and go on to look at technologies such as virtualization that can greatly simplify system design.
The value of a platform approach
The reality is that creating system-on-chip components is an incredibly expensive investment and needs capabilities that most automotive manufacturers lack. Outsourcing chip development to a specialist design house can help, but the automotive OEM must still know enough to define a comprehensive system architecture, one that is compelling both for the here and now and for the useful life of the vehicle, and must drive enough volume to justify the expense. Many companies creating lower-volume products such as autonomous ride-sharing platforms (buses) and trucks will need to harness off-the-shelf platforms, so the priority becomes creating a strategy that avoids being tied to a specific vendor. Toyota's recent adoption of Apex.OS is, I believe, partly based on a desire to migrate away from proprietary technology towards a product whose heritage is open source (the Robot Operating System, ROS). Abstracting this to higher levels, the clear desire from customers is for a platform with a consistent, well-defined and supported API that enables the programmer to create applications without any need to worry about low-level system minutiae.
The automotive industry is starting a journey that developers of military systems have been on for some time, and it can usefully draw on the lessons they have learned before moving on to solutions that address its specific challenges.
The lessons of FACE
At one time, military systems were based upon proprietary applications, middleware, operating systems and/or hardware. This resulted in problems that included long lead times, high costs, and few opportunities to reuse existing technologies. Putting system modifications out to competitive tender was impossible because the only suppliers equipped to make changes were the suppliers of the original system. The FACE Consortium, a partnership between industry suppliers, government experts, academia, and customers, was formed to address those issues. Standardizing approaches for using open standards within military avionics systems promised to lower implementation costs, accelerate development, ensure robust architecture and consistently high-quality software implementation, and maximize opportunities for reuse. The US military industry has used FACE™ (Future Airborne Capability Environment) to drive mandatory conformance requirements for mission critical system software for nearly every applicable military program.
Although FACE informs and guides all aspects of software design for tactical mission systems (communications, flight control, flight map and planning, cockpit displays, etc.), the world of vehicle control harbors reservations about FACE adoption. The imperative to deliver safety-critical, hard real-time control systems has raised concerns about technical feasibility, given the complexities inherent in the FACE multicore Operating System Segment (OSS).
Our recent experience with vehicle control projects, particularly those based on multicore processors, has proven CPU virtualization to be a powerful tool that complements operating systems in resolving integration conflicts between software components whose platform requirements differ greatly in terms of API compatibility and architectural assumptions.
Figure 1 — Typical low-level architecture of existing automotive platform.
FACE views virtualization as primarily a hardware consolidation tool. But as the world presses forward in the development of unmanned vehicles, the need to integrate vehicle control and mission system computing will become mandatory and the concerns will become more pertinent. Given its capacity to deliver on core FACE principles where hard real-time control is essential, virtualization deserves further consideration.
Virtualization in embedded systems
Many of the benefits of multicore virtualization in embedded systems are well documented. The ability to consolidate multiple legacy Single Board Computers (SBCs) with various operating systems and applications into a single, multicore, virtualized SBC is widely acknowledged to be the most tangible benefit to next-generation platforms. But the capacity of CPU virtualization and hypervisors to provide benefits relating to real-time performance, software composability, and architectural robustness is less well known to the veteran embedded software community. The following sections discuss these benefits as applied within the context of the FACE reference architecture:
- Dedicated partitioning segment
- Simplifying space for real-time
- Increasing portability and reuse
Dedicated partitioning segment
Over the last ten years, there have been considerable advances in the ability to run multiple operating systems on a single processor through CPU virtualization. While CPU virtualization was popularized and universally adopted in the IT world, the underlying hardware features are just as appealing to embedded engineers concerned with robustness and predictability.
In traditional platform software design, each processor hosts a single OS kernel that is responsible for managing memory allocation, execution scheduling, interrupt routing, exception handling, peripheral control, and bus multiplexing. Virtualization-enabled multicore hardware is now capable of accommodating many kernels, with each kernel allocated subsets of resources of varying types and sizes. Multiple independent software runtimes can therefore be implemented on a single device without the interference of a common kernel and, consequently, without common-mode failure hazards. Such capabilities enhance fundamental architectural properties relating to safety and security.
From a security perspective, the use of built-in CPU virtualization features to isolate hardware security functions, and to separate application runtime services from hardware control interfaces, goes a long way in providing assurance of system robustness. Such design techniques eliminate commonly exploited threat vectors that result in security policy bypass, privilege escalation, and loss of CPU control altogether.
From a safety perspective, upholding threshold design principles (predictability, integrity, high availability, etc.) is greatly aided by virtualization partitioning features such as:
- DMA channel isolation
- Shared last level cache partitioning
- Memory bus bandwidth allocation
- Independent interrupt, event, and exception handling
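To illustrate how such features are typically expressed, the sketch below shows a hypothetical hypervisor partition configuration in a device-tree-like style. The node and property names are invented for illustration and are not drawn from any particular product; real hypervisors each have their own configuration formats.

```
/* Hypothetical partition configuration: a real-time control partition
   and a general-purpose services partition, each with dedicated cores,
   private memory, cache ways, interrupt routing and DMA channels. */
partition rt_control {
    cores       = <0>;                       /* dedicated CPU core        */
    memory      = <0x80000000 0x04000000>;   /* private RAM region        */
    llc_ways    = <4>;                       /* last-level cache ways     */
    bus_budget  = <20>;                      /* % of memory bus bandwidth */
    irqs        = <34 35>;                   /* interrupts routed here    */
    dma         = <eth0_dma>;                /* isolated DMA channel      */
};
partition gp_services {
    cores       = <1 2 3>;
    memory      = <0x90000000 0x10000000>;
    llc_ways    = <12>;
    bus_budget  = <80>;
    irqs        = <36 37 38>;
};
```

The point of the sketch is that each of the bulleted partitioning features maps to a statically declared, analyzable resource allocation rather than to a policy negotiated at runtime by a shared kernel.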
The ability for software partitions to be fortified and controlled with greater fidelity at the hardware level aligns perfectly with FACE ideals. The diagram below (Figure 2) introduces to the FACE reference architecture the notion of a “Hardware Partitioning Segment” fulfilled by a hypervisor. The depiction shows a hypervisor isolating two sets of software on two different CPU cores, each configured with FACE-conformant components. Each set of software gains stronger partitioning properties than in a single-OS-hosted design where device drivers and internal services are merely separated.
Figure 2— Example FACE configuration with CPU virtualization assisted hardware partitioning segment.
Simplifying Space for Real-time
Adding yet another segment into FACE would be a significant undertaking. Introducing another class of technology and layer of software beneath an OS may seem counterproductive to real-time and safety-conscious developers wary of complexity. But the hardware partitioning and multiplexing capabilities presented by CPU virtualization present the opportunity to encapsulate and map subsets of runtime features for critical tasks on a processor that is simultaneously hosting applications with inherently richer runtime and service dependencies.
Figure 3— Example FACE configuration with independent real-time partition.
For example, suppose a vehicle control health-monitor application, such as the high-frequency majority-voting CBIT (Continuous Built-In Test) needed for TMR (Triple Modular Redundancy) error detection, must run alongside a flight-planning application on a multicore processor. Using a hypervisor-based solution, rather than implementing both applications concurrently on the same OS sharing the same network stack and kernel, the health-monitor application (shown in Figure 3) can be:
- Mapped to a separate CPU core
- Mapped to a separate Ethernet MAC
- Run according to an independent thread scheduling algorithm
- Isolated from orthogonal interrupts and blocking semaphores
- Isolated from DMA and OS kernel memory access errors
- Run on an optimized, minimalistic POSIX compliant runtime environment
The result is an ideal scenario for a real-time programmer looking to simplify analysis of worst-case execution time (WCET). Yet at the LRU (Line Replaceable Unit) level, the platform retains the ability to host applications with richer TSS (Transport Services Segment) and OSS (Operating System Segment) capability requirements less concerned about timing and integrity hazards.
Portability and reuse
Military programs are often stuck with Board Support Package (BSP) Non-Recurring Engineering (NRE) costs that could be avoided if internal platform software were more portable. Low-level code modules (particularly drivers) are notoriously poor at providing the valued properties of reuse and interoperability.
Standardizing OS internal kernel interfaces is not practical due to their unique design and (in many cases) proprietary nature. However, several classes of device drivers that are naturally independent from core services and require minimal OS feature support (such as the file system) can be isolated by a hypervisor and integrated with applications over standard Inter-process Communication (IPC) interfaces.
Figure 4— Example of stand-alone Units of Conformance.
It is demonstrable that devices can be controlled independently from operating systems and integrated with other components without embedding proprietary OS dependencies. Consider an OpenGL UA application that simply needs drivers with access to the GPU device interface. Or consider a self-contained MIL-STD-1553 Service with TSS compatible I/O interfaces made available to PCS (Portable Component Segment) applications (See Figure 4).
Instead of relying on OS implementations of resource mapping and IPC transports, the TSS layer and local application runtime software can have sufficient capabilities to locate dependent modules and integrate with them using standard hypervisor-provided interfaces and services. Such an approach can even follow the FACE Unit of Conformance packaging requirements. This vision is not farfetched given that virtualization standards such as OASIS “VIRTIO” already exist and are well established. Just as FACE relies on POSIX® to uphold standard specifications for the OSS, VIRTIO can similarly support the proposed Hardware Partitioning Segment.
I mentioned disruption earlier in the article. One of the other items on the horizon is the creation of urban mobility platforms that deliver human passengers or cargo through the air. We believe the likely hardware architecture winner in this transition will be automotive platforms (as opposed to cut-down versions of existing avionics platforms). From a system perspective, there will clearly be a need to provide a path to relevant system certification standards, including DO-178. The adoption of FACE, around which companies have created certified and certifiable software stacks, will offer an accelerated time to deployment for this new class of autonomous craft.
Conclusion
FACE is a resounding success. But to date, the portability and interoperability benefits of FACE have been generally limited to the mission system software hosted by operating systems above the TSS layer.
Exacerbating that situation, the targeting of military avionics towards the development of unmanned systems is likely to see the boundaries between the mission system and vehicle control computing domains diminish, and the limitations of FACE become more of an irritation.
To fulfil the needs of the automotive industry, the strengths of FACE must be recognised while the environment is adapted to accommodate the needs of vehicle control software. Recent experience on vehicle control subsystems has proven virtualization as a means to reduce platform software complexity by carving out low-level hardware control access, while providing the well-appreciated architectural benefits of partitioning and interoperability interfaces.
About the author
In his role as chief technical officer, Will Keegan leads the technology direction across all the Lynx product lines. He has been instrumental in the development of key security technologies within Lynx to broaden the reach of the existing products, with a focus on cyber-security, cryptography and virtualization. Will holds a BSc in Computer Science from University of Texas.
