The new generation of “software-defined vehicles”

Technology News |
By Christoph Hammerschmidt



First of all, it should be noted that the Software-defined vehicle cannot be considered in isolation: it communicates in many directions and is therefore part of a larger, more complex whole, a “system of systems”. On the one hand, vehicles communicate with each other in so-called vehicle-to-vehicle communication. This exchange of information is particularly fast and is suitable, for example, for warning at close range of an accident or of the end of a traffic jam that is not yet within view.

Vehicles also communicate with infrastructure located along the roadway, such as intelligent traffic light systems at intersections or roadside MEC (Multi-access Edge Computing) servers operated by telecommunications companies as part of their networks. This is referred to as vehicle-to-infrastructure communication.

However, a large part of the data exchange takes place between the vehicle and a “connected vehicle platform”, a backend system of the manufacturer. This platform can be operated in the OEM’s data center, in a private cloud or in the public cloud. These backend systems must be able to send messages to, and receive messages from, many vehicles in parallel. For autonomous driving in particular, the associated delays must be kept to a minimum; in other words, these backend systems must offer very low latency.
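The parallel fan-out such a backend must perform can be sketched in a few lines. The following is a toy in-memory model; the VehicleHub class, the hazard_warning message and the VIN identifiers are invented for illustration and do not represent any real connected-vehicle-platform API:

```python
import asyncio
import json
import time

# Hypothetical in-memory message hub standing in for the OEM backend.
class VehicleHub:
    def __init__(self):
        self.queues = {}          # vehicle_id -> outbound queue

    def register(self, vehicle_id):
        q = asyncio.Queue()
        self.queues[vehicle_id] = q
        return q

    async def broadcast(self, message):
        # Fan a message out to all connected vehicles in parallel.
        payload = json.dumps(message)
        for q in self.queues.values():
            q.put_nowait(payload)

async def main():
    hub = VehicleHub()
    inboxes = [hub.register(f"VIN{i:03d}") for i in range(3)]
    t0 = time.perf_counter()
    await hub.broadcast({"type": "hazard_warning", "lane": 2})
    latency_ms = (time.perf_counter() - t0) * 1000
    received = [await q.get() for q in inboxes]
    return received, latency_ms

received, latency_ms = asyncio.run(main())
print(len(received), "vehicles notified in", round(latency_ms, 2), "ms")
```

A production platform would of course use a messaging protocol such as MQTT and a horizontally scalable broker rather than an in-process queue, but the fan-out pattern is the same.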

Figure 1: IBM technology for the Software-defined vehicle

Vehicle data must then be analyzed using analytics and AI and connected to other data sources to provide innovative OEM-specific use cases. As these backend systems are usually designed for specific regions, they should be easily rolled out to new regions and markets; they should be able to serve an increasing number of vehicles over time; and they should also scale according to the increasing bandwidth and thus the increasing amount of data exchanged.

Updates “over-the-air”

The increasing bandwidth, which will grow significantly with the spread of 5G, makes it possible to carry out even major software updates over the air interface instead of taking the vehicle to an authorized workshop, as was the case in the past. This enables OEMs to keep the vehicle up to date over its lifetime. Some OEMs are also considering offering additional functions and services this way in order to generate further revenue. However, over-the-air (OTA) updates can also be used to close security gaps and correct software errors.
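A minimal sketch of the integrity check that precedes any OTA installation might look as follows. It uses a shared-secret HMAC purely for illustration; production OTA systems rely on asymmetric signatures and signed metadata chains (for example Uptane-style frameworks), which this toy does not model:

```python
import hashlib
import hmac

# Demo key only: a real system would never ship a shared secret in code.
SHARED_KEY = b"demo-key-not-for-production"

def sign_package(firmware: bytes) -> str:
    return hmac.new(SHARED_KEY, firmware, hashlib.sha256).hexdigest()

def verify_and_install(firmware: bytes, signature: str) -> bool:
    expected = sign_package(firmware)
    # Constant-time comparison to avoid timing side channels.
    if not hmac.compare_digest(expected, signature):
        return False          # reject tampered or corrupted update
    # ... here the firmware would be flashed to the inactive partition ...
    return True

pkg = b"ecu-firmware-v2.4.1"
sig = sign_package(pkg)
print(verify_and_install(pkg, sig))             # valid update
print(verify_and_install(pkg + b"X", sig))      # tampered update
```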

Closing security gaps to defend against hacker attacks is a serious issue that is now also mandatory under UNECE regulations.

UNECE Regulation R155

UNECE Regulation R155 [1] on vehicle cyber security sets out a number of very comprehensive requirements that are now being implemented by OEMs worldwide. These include the implementation and maintenance of a Cyber Security Management System (CSMS) that covers the entire vehicle lifecycle – development, production, distribution, delivery, maintenance and repair in workshops, right up to scrapping. ISO/SAE 21434 (“Road Vehicles – Cybersecurity Engineering”) [2] regulates the technical details. Furthermore, the establishment of a Software Update Management System (SUMS) is mandatory. Another element is the monitoring of vehicles in operation: OEMs must be able to detect hacker attacks and initiate countermeasures by means of Security Operation Centers (SOCs) for vehicles.
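One kind of signal such a vehicle SOC might monitor can be illustrated with a toy rate check: a sudden flood of frames on a single CAN identifier is a classic symptom of a message-injection attack. All IDs, baselines and thresholds below are invented for the demo:

```python
from collections import Counter

# Expected frames per second per CAN ID (invented values).
BASELINE_PER_SECOND = {0x0F0: 10, 0x1A4: 20}

def flag_anomalies(observed: Counter, factor: float = 3.0):
    """Flag any CAN ID whose observed rate exceeds its baseline by `factor`."""
    alerts = []
    for can_id, count in observed.items():
        baseline = BASELINE_PER_SECOND.get(can_id, 0)
        if baseline and count > factor * baseline:
            alerts.append((hex(can_id), count))
    return alerts

# One second of observed traffic: ID 0x1A4 is being flooded.
window = Counter({0x0F0: 11, 0x1A4: 95})
print(flag_anomalies(window))
```

Real in-vehicle intrusion detection combines many such indicators and forwards events to the backend SOC for correlation; this sketch shows only the simplest building block.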

Many manufacturers began taking measures to protect their vehicles against hacker attacks years ago. Now they are also legally required to do so. The Software-defined vehicle must be protected against cyber attacks.

High Performance Compute Units – a Data Center on Wheels

We see very significant changes in the electrical/electronic (E/E) architecture. The Software-defined vehicle will no longer have up to 100 different electronic control units (ECUs). Instead, a few High Performance Compute units (HPCs) will be installed, surrounded by a number of smaller ECUs that manage sensors and actuators.

HPCs are more similar to powerful computers in data centers than to vehicle-specific electronics; HPCs allow more compute- and resource-intensive operating systems and middleware, so that classic IT technology such as Linux and software containers become interesting for use in vehicles.

A new generation of vehicle software

Although Linux in vehicles is not entirely new – think GENIVI, now the Connected Vehicle Systems Alliance (COVESA) [3], and Automotive Grade Linux (AGL) [4] – we are currently seeing a resurgence of interest in the topic. There are many reasons for this: OEMs complain about vendor lock-in, i.e. dependence on certain software providers. Open source, and thus also Linux, prevents vendor lock-in and is no longer considered a “spectre”. Unlike with proprietary operating systems, there are many trained software engineers with good Linux knowledge on the market. Security gaps and software errors are usually discovered and eliminated more quickly in the open source world. And if one does not want to take care of the maintenance of open source oneself, there are service providers and product suppliers who “refine” open source and provide ready-made, easily consumable solutions.

Software containers can also be run on Linux. Containerization has been known in traditional IT for a long time and is widely used because it greatly simplifies deployment, but also increases software portability, reuse and maintainability.

For automotive OEMs, containers on Linux in the vehicle are so interesting because the same technology – Linux and containers – can also be found in the cloud and the OEMs’ data centers; the same applies to MECs and road side units. OEMs would thus have the same technology base – in the vehicle, in the infrastructure surrounding the vehicle and in the backend. Thus, one and the same software could be flexibly deployed as a container either into the car or into the road side unit or into the backend: A huge gain in flexibility and freedom of choice.
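The idea of one artifact, many targets, can be sketched as follows. The target names and the deploy() stub are hypothetical, standing in for a real container runtime or orchestrator such as podman or Kubernetes:

```python
# Same container image, three deployment targets, selected by configuration.
# Architectures and runtime names are illustrative assumptions.
TARGETS = {
    "vehicle": {"arch": "arm64", "runtime": "in-car container runtime"},
    "rsu":     {"arch": "arm64", "runtime": "MEC edge cluster"},
    "backend": {"arch": "amd64", "runtime": "cloud Kubernetes"},
}

def deploy(image: str, target: str) -> str:
    cfg = TARGETS[target]
    # In reality: pull the image built for cfg["arch"] and start it on cfg["runtime"].
    return f"{image} ({cfg['arch']}) -> {cfg['runtime']}"

for t in TARGETS:
    print(deploy("oem/hazard-detector:1.2", t))
```

The point of the sketch is that the deployment decision is pure configuration: the software artifact itself does not change between car, roadside unit and backend.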

However, in order to be able to use Linux and containers in as many vehicle domains as possible, Linux must be optimally adapted to the special requirements of vehicle use. Software engineers complain about the insufficient real-time capability of Linux, which would be needed for many applications in the vehicle. And for use in safety-critical applications, certification in accordance with the ISO 26262 standard [5] is still lacking. Achieving this certification is not easy, since Linux was not developed with the claim of being safe in the sense of ISO 26262, and unfortunately it is not enough to “test in” safety after the fact. Qualification and certification are therefore a major task. The first companies are taking it on: among others, the Linux market leader Red Hat recently announced that it intends to launch a continuously certified Linux for use in vehicles [6]. Red Hat is aiming for ASIL-B. ASIL stands for Automotive Safety Integrity Level, a measure of criticality that provides a scheme for determining risk. There are levels A, B, C and D, with A imposing the least stringent and D the most stringent safety requirements.
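How an ASIL is derived can be illustrated with the well-known additive shorthand over severity (S1–S3), exposure (E1–E4) and controllability (C1–C3). The shorthand reproduces the lookup table of ISO 26262-3, which remains the normative reference for each class:

```python
# ASIL determination shorthand: sum the S, E and C class numbers.
# S3/E4/C3 (sum 10) is the most critical combination -> ASIL D;
# sums of 6 or less fall outside ASIL scope (QM, quality management).
def asil(severity: int, exposure: int, controllability: int) -> str:
    total = severity + exposure + controllability
    return {10: "ASIL D", 9: "ASIL C", 8: "ASIL B", 7: "ASIL A"}.get(total, "QM")

print(asil(3, 4, 3))  # worst case S3/E4/C3
print(asil(1, 4, 3))  # e.g. S1/E4/C3
print(asil(2, 2, 2))  # low-risk combination
```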

Growing efforts for software integration

Fewer, but more powerful compute units also mean more effort for software integration: it makes a difference whether 500,000 lines of code run on one ECU or 10 million lines of code run on one HPC. In the “old” E/E world, integration meant wiring up about 100 ECUs. It was about connecting hardware boxes with defined communication interfaces via standard bus systems. Now there are tens of thousands of software modules that have to be integrated on an HPC.

Here, again, containerization brings advantages: since the containerized software packages carry all the necessary libraries and runtime environments with them inside the container, they are relatively isolated and run stably, ideal prerequisites for easy integration.

Software development

There is no question about it: for the new generation of Software-defined vehicles, even more software than before needs to be developed and “put on the road”. For a long time, the industry has been complaining about the lack of well-trained software engineers, so it makes sense to do everything possible to increase the productivity of the existing team of developers. Agile software development has gained widespread acceptance because, among other things, it delivers results faster and finds errors earlier, which reduces development costs. However, these agile methods stand in contrast to the “traditional” V-model-oriented ASPICE and ISO 26262 development models. Integrating the two sensibly is the order of the day. Last but not least, these software development approaches – the processes, methods and tools – need to be integrated with the working practices of the data scientists, the teams that develop the AI. AI development is all about the data: many petabytes (PB) of data are needed in the particularly AI-intensive domain of autonomous driving. These petabytes must be collected, sifted, selected and then processed so that they can be used for machine learning (ML), verification and validation. This requires completely different processes, methods and tools. The three worlds of agile software development, V-model-based approaches and AI engineering must therefore be integrated in a meaningful way to ensure a smooth development process.

Lots and lots of data

As stated earlier, data science teams need many petabytes of data to develop AI for autonomous driving. Some of this data is gathered from normal operations: vehicles already sold send data to the manufacturer’s backend, where it is used to improve the AI. Another part of the data is generated artificially: in virtual simulations, it is relatively easy to change weather conditions, create critical traffic situations or let accidents happen.

However, the majority of the data for the development of autonomous driving comes from special test vehicles, of which each manufacturer and supplier operates a whole fleet, distributed around the world. These test vehicles are equipped with special electronics that record sensor data from camera, lidar and radar systems along with in-vehicle message communications. This data is stored on cartridges that are removed at the end of the test drive. Only part of the recorded data is really valuable for the development of the AI, namely those sequences in which something new or unexpected happens, because only such data can improve the AI.
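The triage step just described, keeping only the sequences in which something unexpected happens, can be sketched as a simple filter. Here the “surprise” of a sequence is approximated by an invented prediction-error score; real pipelines derive such scores from model disagreement, rare-object detectors or driver interventions:

```python
# Keep only recordings whose prediction error exceeds a threshold,
# i.e. the sequences from which the AI can still learn something.
def select_interesting(records, threshold=0.3):
    # each record: (sequence_id, prediction_error between model and ground truth)
    return [seq for seq, err in records if err > threshold]

drive_log = [("seq-001", 0.05), ("seq-002", 0.62),
             ("seq-003", 0.11), ("seq-004", 0.48)]
print(select_interesting(drive_log))   # only the surprising sequences survive
```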

It is obvious that reviewing, selecting and preprocessing this data is very time-consuming and requires modern data management. Petabytes of data can no longer simply be copied from A to B. For this reason, companies are using hybrid solutions in which some data is located in their own data center while other data is held in one or more clouds. Modern data fabric approaches allow this distributed data to be accessed in a unified way, so that data scientists and engineers can focus on their core development tasks instead of wasting time searching for and copying data.
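The core promise of a data fabric, one access path over many storage locations, can be sketched in pure Python. The catalog and the backends below are toy stand-ins for an on-premises data lake and two clouds, not any particular product’s API:

```python
# Toy data fabric: a catalog maps logical names to (backend, key) pairs,
# so consumers never need to know where the bytes physically live.
class DataFabric:
    def __init__(self):
        self.catalog = {}                                   # logical name -> location
        self.backends = {"onprem": {}, "cloud_a": {}, "cloud_b": {}}

    def put(self, backend, key, blob, logical_name):
        self.backends[backend][key] = blob
        self.catalog[logical_name] = (backend, key)

    def get(self, logical_name):
        backend, key = self.catalog[logical_name]
        return self.backends[backend][key]

fabric = DataFabric()
fabric.put("onprem",  "lidar/run42", b"point-cloud", "test-drive/lidar")
fabric.put("cloud_a", "cam/run42",   b"frames",      "test-drive/camera")
# The consumer asks by logical name, regardless of physical location:
print(fabric.get("test-drive/camera"))
```

Production data fabrics add access control, caching and movement policies on top of exactly this kind of catalog indirection.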

Together we are strong

No one can solve all these challenges – Big Data, AI development, new software in the vehicle, connectivity solutions, backend and OTA systems, safety and security – on their own. It takes many specialists from different industries and companies.

IBM is working with the entire industry to successfully address the challenges in Software-defined vehicle development and operations. Figure 1 shows the current IBM focus areas.

A nice showcase project is the setup of a supercomputer at Continental AG, which is used for the development of autonomous driving [7]. Continental worked with Nvidia, who contributed their Nvidia DGX systems to the solution for AI training. The solution runs in data centers owned by Equinix, a global colocation infrastructure provider. IBM Business Partner SVA System Vertrieb Alexander GmbH integrated and technically implemented the solution. IBM provided the storage technology, IBM Spectrum Scale and IBM Elastic Storage System, which now allows Continental to perform up to 14 times more AI experiments in the same time. This example shows how important collaboration is for success.

Literature

[1] United Nations: Addendum 154 – UN Regulation No. 155, Uniform provisions concerning the approval of vehicles with regards to cyber security and cyber security management system, January 2021. Online: https://unece.org/sites/default/files/2021-03/R155e.pdf, retrieved 4 December 2021

[2] International Organization for Standardization: ISO/SAE 21434:2021, Road vehicles — Cybersecurity engineering. https://www.iso.org/

[3] Connected Vehicle Systems Alliance (COVESA): About COVESA. Online: https://www.covesa.global/about-covesa, retrieved 18 December 2021

[4] Automotive Grade Linux: About Automotive Grade Linux (AGL). Online: https://www.automotivelinux.org/about/, retrieved 18 December 2021

[5] International Organization for Standardization: ISO 26262:2018, Road vehicles — Functional safety, Part 1 – Part 12. https://www.iso.org/

[6] Red Hat, Inc.: Red Hat Sets Sights on Delivering the First Continuously Certified Linux Platform for Road Vehicles. Online: https://www.redhat.com/de/about/press-releases/red-hat-sets-sights-delivering-first-continuously-certified-linux-platform-road-vehicles, retrieved 8 December 2021

[7] Cotton, L., International Business Machines: Accelerating insight into vehicle safety at Continental. Online: https://www.ibm.com/case-studies/continental-automotive/, retrieved 9 December 2021

 

About the author:

Hans Windpassinger is IBM Solution Leader, Global Automotive, Aerospace & Defense Industries in Munich, Germany. He is responsible for IBM solutions for the development and operation of “Software-defined vehicles”.

 
