
Continental powers up supercomputer for AI training

By Nick Flaherty

Continental’s supercomputer is built with more than 50 Nvidia DGX systems, connected with the Nvidia Mellanox InfiniBand network. According to the publicly available TOP500 list of supercomputers, it is the top-ranked system in the automotive industry. A hybrid approach was chosen so that capacity and storage can be extended through cloud solutions if needed.

The main tasks for the new supercomputer are AI training, deep learning, simulation and virtual data generation. Advanced driver assistance systems use AI to make decisions, assist the driver and ultimately operate autonomously. Environmental sensors such as radar and cameras deliver raw data. This raw data is processed in real time by intelligent systems to create a comprehensive model of the vehicle’s surroundings and to devise a strategy for interacting with the environment. Finally, the vehicle needs to be controlled so that it behaves as planned. But as these systems become more and more complex, traditional software development and machine learning methods have reached their limits. Deep learning and simulation have become fundamental methods in the development of AI-based solutions.
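For readers who want to picture that processing chain, the sketch below lays out the sense, plan and control steps as a plain Python loop. It is a minimal illustration only: every class, function and value is an assumption made for this article, not Continental code, and the trained perception networks are replaced by a stub.

```python
# A minimal, illustrative sketch of the sense -> model -> plan -> control loop
# described above. All names and values are assumptions made for this article;
# the trained perception networks are replaced by a stub.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class EnvironmentModel:
    """Comprehensive model of the vehicle's surroundings, fused from raw sensor data."""
    objects: List[Dict] = field(default_factory=list)


def perceive(radar_frame: bytes, camera_frame: bytes) -> EnvironmentModel:
    """Stand-in for the perception networks: fuse raw radar and camera data."""
    return EnvironmentModel(objects=[{"type": "car", "distance_m": 42.0}])


def plan(model: EnvironmentModel) -> Dict:
    """Devise a strategy for how to interact with the environment."""
    nearest = min((o["distance_m"] for o in model.objects), default=float("inf"))
    return {"target_speed_kmh": 30.0 if nearest < 50.0 else 100.0}


def control(strategy: Dict) -> None:
    """Actuate the vehicle so that it behaves as planned."""
    print(f"setting target speed to {strategy['target_speed_kmh']} km/h")


if __name__ == "__main__":
    # One iteration of the real-time loop, fed with dummy sensor frames.
    control(plan(perceive(b"raw radar frame", b"raw camera frame")))
```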

With deep learning, an artificial neural network enables the machine to learn from experience and connect new information with existing knowledge, essentially imitating the learning process of the human brain. But while a child can recognize a car after being shown a few dozen pictures of different car types, training a neural network that will later assist a driver or even operate a vehicle autonomously takes several thousand hours of training with millions of images, and therefore enormous amounts of data. The new supercomputer not only reduces the time needed for this complex process, it also reduces the time to market for new technologies.
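To make that training effort concrete, here is a minimal sketch of a supervised image-training loop, assuming PyTorch is available and substituting random tensors for real camera frames and annotations. It is not Continental’s tooling, only the basic pattern that, at production scale, runs over millions of images.

```python
# Minimal sketch of supervised image training, assuming PyTorch is installed.
# The tiny network, random "camera frames" and random labels are placeholders;
# they only illustrate the loop that, at scale, consumes millions of images.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 2),  # e.g. "car" vs "no car"
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    images = torch.randn(32, 3, 64, 64)   # stand-in for a batch of camera images
    labels = torch.randint(0, 2, (32,))   # stand-in for human or synthetic annotations
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()

print(f"final training loss: {loss.item():.3f}")
```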

The Nvidia supercomputer, a machine with an architecture specifically optimized for AI applications, helps users slash development time significantly; Continental talks of hours instead of weeks. “The system reduces the time to train neural networks, as it allows for at least 14 times more experiments to be run at the same time,” explains Christian Schumacher, head of Program Management Systems in Continental’s Advanced Driver Assistance Systems (ADAS) business unit.
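The “14 times more experiments” figure is, at heart, about concurrency: more GPU nodes mean more independent training runs executing at the same time. The sketch below illustrates the idea with Python’s standard ProcessPoolExecutor and a hypothetical train_model stub; it is not the scheduler used on the DGX cluster.

```python
# Illustrative only: with N worker processes (think of them as N GPU nodes),
# N training experiments run concurrently, so a larger cluster cuts the
# wall-clock time of a hyperparameter sweep roughly proportionally.
# train_model is a hypothetical stub, not an Nvidia or Continental API.
from concurrent.futures import ProcessPoolExecutor
import itertools


def train_model(learning_rate: float, batch_size: int) -> float:
    """Placeholder for one full training run; returns a dummy validation score."""
    return 1.0 / (learning_rate * batch_size)


if __name__ == "__main__":
    grid = list(itertools.product([1e-4, 1e-3, 1e-2], [32, 64, 128, 256]))
    lrs, batch_sizes = zip(*grid)

    # Each worker picks up the next pending experiment as soon as it is free.
    with ProcessPoolExecutor(max_workers=4) as pool:
        scores = list(pool.map(train_model, lrs, batch_sizes))

    best = max(zip(scores, grid))[1]
    print("best hyperparameters (lr, batch size):", best)
```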


To date, the data used for training these neural networks has come mainly from Continental’s real-world test vehicle fleet. Currently, the fleet drives around 15,000 test kilometers each day, collecting around 100 terabytes of data, the equivalent of 50,000 hours of movies. The recorded data can already be used to train new systems by replaying it, thus simulating physical test drives. With the supercomputer, data can now also be generated synthetically, a highly compute-intensive use case that allows systems to learn by travelling virtually through a simulated environment.
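A rough way to picture the two data paths is sketched below: one generator replays recorded fleet drives, the other produces synthetic drives with randomized conditions, and both feed the same training stream. All field names and formats are assumptions for illustration, not Continental’s data pipeline.

```python
# Sketch of the two data paths mentioned above: replaying recorded fleet drives
# and generating drives synthetically in a simulated environment. The frame
# fields and file name are illustrative assumptions, not Continental's formats.
import random
from typing import Dict, Iterator


def replay_recorded_drive(log_path: str) -> Iterator[Dict]:
    """Stream frames from a recorded test drive (placeholder reader)."""
    for frame_id in range(3):
        yield {"source": "fleet", "log": log_path, "frame": frame_id}


def generate_synthetic_drive(num_frames: int, seed: int = 0) -> Iterator[Dict]:
    """Generate frames from a simulation with randomized, possibly rare conditions."""
    rng = random.Random(seed)
    for frame_id in range(num_frames):
        yield {
            "source": "synthetic",
            "frame": frame_id,
            "weather": rng.choice(["clear", "rain", "snow", "fog"]),
            "pedestrians": rng.randint(0, 5),
        }


if __name__ == "__main__":
    # Mix both sources into a single training stream.
    training_stream = list(replay_recorded_drive("drive_0001.log"))
    training_stream += list(generate_synthetic_drive(num_frames=5))
    for frame in training_stream:
        print(frame)
```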

This has several potential advantages for the development process. Firstly, over the long run, it might make recording, storing and mining the data generated by the physical fleet unnecessary, as the required training scenarios could be created instantly on the system itself. Secondly, it increases speed: virtual vehicles can cover in a few hours the same number of test kilometers that would take a real car several weeks. Thirdly, synthetically generated data makes it possible to train systems to process and react to changing and unpredictable situations. Ultimately, this will allow vehicles to navigate safely through changing and extreme weather conditions or make reliable forecasts of pedestrian movements, thus paving the way to higher levels of automation.

The ability to scale was one of the main drivers behind the conception of the DGX-based system. With this technology, machines can learn faster, better and more comprehensively than through any human-controlled method, with potential performance growing exponentially with every evolutionary step.

The supercomputer is located in a data center in Frankfurt, chosen for its proximity to cloud providers and, more importantly, for its AI-ready environment, which fulfills specific requirements regarding cooling, connectivity and power supply. Certified green energy is used to power the computer, and GPU clusters are by design much more energy efficient than CPU clusters.
