To date, the data used for training these neural networks has come mainly from Continental's real-world test-vehicle fleet. The vehicles currently drive around 15,000 test kilometers each day, collecting around 100 terabytes of data, the equivalent of 50,000 hours of video. Already, the recorded data can be replayed to train new systems, thus simulating physical test drives. With the supercomputer, data can now also be generated synthetically, a compute-intensive use case that allows systems to learn by travelling virtually through a simulated environment.
This offers several advantages for the development process. Firstly, in the long run it may make recording, storing and mining the data generated by the physical fleet unnecessary, as the required training scenarios can be created instantly on the system itself. Secondly, it increases speed: virtual vehicles can cover in a few hours the same number of test kilometers that would take a real car several weeks. Thirdly, synthetically generated data lets systems learn to process and react to rare and unpredictable situations. Ultimately, this will allow vehicles to navigate safely through changing and extreme weather conditions or to make reliable forecasts of pedestrian movements, thus paving the way to higher levels of automation.
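The two data sources described above can be sketched as follows. This is a minimal, purely illustrative example, not Continental's actual pipeline: the frame structure, the weather categories and the function names are all hypothetical, chosen only to show how replayed recordings and synthetically generated scenarios could feed one training stream.

```python
import random

def replay_recorded_frames(recording):
    """Yield sensor frames from a recorded test drive (hypothetical format),
    standing in for the replay of physical fleet data."""
    for frame in recording:
        yield frame

def generate_synthetic_frames(n_frames, seed=0):
    """Generate hypothetical synthetic frames, e.g. varying weather
    conditions and pedestrian counts to cover rare situations."""
    rng = random.Random(seed)  # seeded for reproducible scenarios
    weather_options = ["clear", "rain", "snow", "fog"]
    for _ in range(n_frames):
        yield {
            "weather": rng.choice(weather_options),
            "pedestrian_count": rng.randint(0, 5),
        }

def training_stream(recording, n_synthetic):
    """Combine replayed and synthetic frames into one training stream."""
    yield from replay_recorded_frames(recording)
    yield from generate_synthetic_frames(n_synthetic)

# One recorded frame plus three synthetic ones.
recorded = [{"weather": "clear", "pedestrian_count": 1}]
frames = list(training_stream(recorded, 3))
print(len(frames))  # → 4
```

The point of the sketch is only the architectural split: replayed data is bounded by what the fleet has actually driven, while the synthetic generator can be asked for arbitrarily many frames covering conditions the fleet never encountered.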
The ability to scale was one of the main drivers behind the decision for the Nvidia DGX system. With this technology, machines can learn faster, better and more comprehensively than through any human-controlled method, and potential performance grows with every evolutionary step.
The supercomputer is located in a data center in Frankfurt, chosen for its proximity to cloud providers and, more importantly, for its AI-ready environment, which fulfills specific requirements regarding cooling, connectivity and power supply. The computer runs on certified green energy, and GPU clusters are by design considerably more energy-efficient than CPU clusters for this kind of workload.