Nvidia has launched its latest Jetson system-on-module for high-performance machine learning with safety-critical hardware.
The Jetson AGX Orin SoM measures 100mm x 87mm and delivers six times the performance of the Jetson AGX Xavier in a pin-compatible form factor. The module is aimed at next-generation autonomous delivery and logistics robots, factory systems and large industrial UAVs.
“Jetson AGX Orin is the most powerful AI edge computer, delivering up to 200 TOPS of AI performance for powering autonomous systems,” said Leela Karumbunathan, Hardware Product Manager, Autonomous Machines.
The module uses the Orin SoC, which combines an Ampere-architecture GPU with support for sparse AI workloads, ARM Cortex-A78AE CPUs, next-generation deep learning and vision accelerators, and a video encoder and decoder. High-speed I/O, 204 GB/s of memory bandwidth, and 32 GB of DRAM enable the module to feed multiple concurrent AI application pipelines.
The Jetson AGX Orin uses an integrated Ampere GPU composed of two Graphics Processing Clusters (GPCs), eight Texture Processing Clusters (TPCs), 16 Streaming Multiprocessors (SMs), 192 KB of L1 cache per SM, and 4 MB of L2 cache. Each SM has 128 CUDA cores and four Tensor Cores, compared to 64 CUDA cores per SM in the previous-generation Volta architecture. This gives a total of 2048 CUDA cores and 64 Tensor Cores, with up to 131 sparse TOPS of INT8 Tensor compute and up to 4.096 FP32 TFLOPS of CUDA compute.
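As a back-of-the-envelope check, the core counts and the FP32 figure quoted above are consistent with each other. The sketch below uses only the per-SM numbers from the article; the ~1 GHz GPU clock is an assumption inferred from the stated 4.096 TFLOPS figure, not a published spec.

```python
# Sanity-check the Jetson AGX Orin GPU figures quoted in the article.
# Per-SM counts come from the text; the GPU clock is an assumption
# implied by the quoted 4.096 FP32 TFLOPS figure.

SMS = 16                       # streaming multiprocessors (2 GPCs x 8 TPCs)
CUDA_PER_SM = 128              # CUDA cores per SM (Ampere)
TENSOR_PER_SM = 4              # Tensor Cores per SM

cuda_cores = SMS * CUDA_PER_SM        # 2048 in total
tensor_cores = SMS * TENSOR_PER_SM    # 64 in total

GPU_CLOCK_GHZ = 1.0            # assumed clock (hypothetical)
FLOPS_PER_CORE_PER_CYCLE = 2   # one fused multiply-add per cycle

fp32_tflops = cuda_cores * FLOPS_PER_CORE_PER_CYCLE * GPU_CLOCK_GHZ / 1000
print(cuda_cores, tensor_cores, fp32_tflops)  # 2048 64 4.096
```

At an assumed 1 GHz, 2048 cores each retiring a fused multiply-add per cycle gives exactly the 4.096 TFLOPS the article cites.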
The sparsity is supported by a fine-grained structured compute scheme that doubles throughput and reduces memory usage.
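Ampere's fine-grained structured sparsity keeps two non-zero values in every group of four weights (the 2:4 pattern), letting the hardware skip the zeroed half. A minimal plain-Python sketch of that pruning step, not Nvidia's actual API:

```python
# Sketch of 2:4 fine-grained structured sparsity (Ampere-style):
# in every group of 4 weights, keep the 2 largest by magnitude and
# zero the rest. Hardware can then skip the zeros, roughly doubling
# throughput and halving weight storage. Illustration only.

def prune_2_of_4(weights):
    """Zero the two smallest-magnitude values in each group of four."""
    pruned = []
    for i in range(0, len(weights), 4):
        group = weights[i:i + 4]
        # indices of the two largest magnitudes in this group
        keep = sorted(range(len(group)), key=lambda j: -abs(group[j]))[:2]
        pruned.extend(w if j in keep else 0.0 for j, w in enumerate(group))
    return pruned

w = [0.9, -0.1, 0.05, -0.7, 0.2, 0.3, -0.25, 0.01]
print(prune_2_of_4(w))  # [0.9, 0.0, 0.0, -0.7, 0.0, 0.3, -0.25, 0.0]
```

Exactly half the values survive, which is where the doubled throughput and reduced memory footprint come from.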
The biggest change in the CPU in Jetson AGX Orin is the move from Nvidia's custom Carmel CPU clusters to the ARM Cortex-A78AE. The Orin CPU complex is made up of twelve 2 GHz cores, each with a 64 KB L1 instruction cache and a 64 KB L1 data cache.