Hungarian machine-learning technology developer AImotive has started shipping a production-ready version of its accelerator IP for inference engines in automotive chip designs.
The aiWare3P neural-network (NN) hardware inference engine IP is aimed at Level 2 and Level 3 machine-learning chip designs that handle camera feeds at high definition and above. The IP can be used either as an accelerator inside a system-on-chip or as a separate chip.
The key is that the IP block uses a tile-based array of multiply-accumulate (MAC) units controlled by a state machine. The architecture was designed from the ground up for automotive use; it provides a more deterministic implementation than other approaches and allows ASIL-B operation of the resulting chips.
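To make the idea concrete, the following is a minimal illustrative sketch of a tile-based multiply-accumulate loop. The tile size and iteration order here are assumptions for illustration, not aiWare3P internals; the point is that a statically scheduled loop nest has a cycle count fixed at compile time, independent of the data, which is what makes the approach deterministic.

```python
# Illustrative only: a tile-based multiply-accumulate (MAC) loop nest.
# TILE and the loop order are assumed values, not published aiWare3P
# parameters. Because the schedule is fixed, the number of MAC steps
# is known in advance for a given matrix size.

TILE = 4  # assumed tile edge length

def tiled_matmul(a, b, n):
    """Multiply two n x n matrices (lists of lists) tile by tile."""
    c = [[0] * n for _ in range(n)]
    for i0 in range(0, n, TILE):          # static tile schedule:
        for j0 in range(0, n, TILE):      # iteration order is fixed,
            for k0 in range(0, n, TILE):  # independent of the data
                for i in range(i0, min(i0 + TILE, n)):
                    for j in range(j0, min(j0 + TILE, n)):
                        acc = c[i][j]
                        for k in range(k0, min(k0 + TILE, n)):
                            acc += a[i][k] * b[k][j]  # one MAC operation
                        c[i][j] = acc
    return c
```

In hardware, each tile maps onto the physical MAC array and the state machine steps through the same fixed schedule, so worst-case execution time can be bounded, a property safety standards such as ISO 26262 reward.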
Budapest-based AImotive has also revised its software development kit (SDK) to include the performance estimator already used by the compiler for scheduling. Making it available to designers lets them assess how a neural network would perform on a given IP instantiation.
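A hypothetical sketch of what such an estimate might look like for a single convolution layer is shown below. AImotive's actual estimator and its model parameters are not public; the function names, the MAC-unit count, and the utilization figure are all assumptions used purely for illustration.

```python
# Hypothetical layer-latency estimate: count the MAC operations in a
# convolution, then divide by assumed sustained MAC throughput.
# All names and numbers here are illustrative assumptions, not
# aiWare3P specifications.

def conv_macs(h, w, c_in, c_out, k):
    """Total MAC operations for a k x k convolution over an h x w x c_in input."""
    return h * w * c_in * c_out * k * k

def estimate_latency_ms(macs, mac_units, freq_hz, utilization=0.8):
    """Latency assuming a fixed fraction of peak MAC throughput is sustained."""
    cycles = macs / (mac_units * utilization)
    return cycles / freq_hz * 1e3

# Example: one 3x3 conv layer on a 1920x1080 feed, 32 -> 64 channels,
# on an assumed 8192-MAC engine at 2 GHz.
macs = conv_macs(1080, 1920, 32, 64, 3)
print(f"{estimate_latency_ms(macs, 8192, 2e9):.2f} ms")
```

Running such an estimate per layer, and summing over a whole network, is how a designer could judge whether a candidate IP instantiation meets a frame-rate budget before committing to silicon.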
“Our production-ready aiWare3P release brings together everything we know about accelerating neural networks for vision-based automotive AI inference applications,” said Marton Feher, senior vice president of hardware engineering for AImotive. “We now have one of the automotive industry’s most efficient and compelling NN acceleration solutions for volume production L2/L2+/L3 AI. When complemented by AImotive’s significant algorithmic, safety and production expertise for automated driving, we believe we offer our customers the most technology-rich automotive-focused solutions available today.”
Each aiWare3P hardware IP core offers up to 16 TMAC/s (>32 TOPS) at 2 GHz, with multi-core and multi-chip implementations capable of delivering 50+ TMAC/s (>100 INT8 TOPS), making it well suited to multi-camera or heterogeneous sensor-rich applications. The core is designed for AEC-Q100 extended-temperature operation and includes a range of features to help users achieve ASIL-B and above certification.
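A quick back-of-envelope check ties these headline numbers together: one MAC counts as two operations (a multiply and an add), so TOPS is twice the TMAC/s figure, and dividing peak MAC throughput by the clock frequency gives the implied number of parallel MAC units. The per-core MAC count below is inferred from the quoted figures, not a published specification.

```python
# Sanity-checking the quoted throughput figures.
# 1 MAC = 2 operations (multiply + add), so TOPS = 2 * TMAC/s.

def tops(tmacs):
    return 2 * tmacs

freq_hz = 2e9                # quoted clock: 2 GHz
tmacs_per_core = 16          # quoted per-core peak: 16 TMAC/s

# Implied parallel MAC units per core (an inference, not a spec):
macs_per_core = tmacs_per_core * 1e12 / freq_hz

print(tops(tmacs_per_core))  # per-core TOPS
print(tops(50))              # TOPS for a 50 TMAC/s multi-chip build
print(int(macs_per_core))    # implied MAC units per core
```

This is also why vendors quote both units: TMAC/s describes the physical array, while TOPS is the marketing-friendly operations count at INT8 precision.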