The move makes Helm.ai’s high-end advanced driver assistance system (ADAS) software available to the ecosystem of customers and partners built around the Ambarella platform. This integration allows both companies to rapidly iterate on technical approaches that meet the latest automotive market needs, while offering combined hardware and software solutions to joint customers.
Helm.ai and Ambarella first demonstrated an integration of Helm.ai technology at CES 2020. At this year’s event, the two companies demonstrated the Helm.ai full 360° surround-view camera perception stack for L2+/L3 and autonomous driving running on Ambarella’s automotive grade CV2FS SoCs. The new integration leverages all hardware capabilities of the Ambarella CVflow architecture to optimize the performance of Helm.ai’s AI algorithms while meeting the accuracy goals expected of an automotive grade solution.
According to Helm.ai CEO Vlad Voroninski, the Ambarella hardware won him over with its “dramatically low power consumption along with high performance of 5.5 TOPS (Tera Operations per Second).” The company also plans to port its software to Ambarella’s latest CV3 domain controller SoC family as part of its ongoing offerings for joint customers and partners.
“Helm.ai’s novel approach to training AI systems for computer vision, called Deep Teaching, offers far-reaching implications for the future of computer vision and autonomous driving, as well as other industries such as robotics, aviation, manufacturing and even retail,” explained Ambarella President and CEO Fermi Wang. “This new integration combines Helm.ai’s high-end ADAS software and our CV2FS automotive grade AI SoC platform to offer industry-leading performance per watt. In the next stage of this collaboration, we are using Ambarella’s scalable CVflow platform to port Helm.ai’s advanced ADAS software onto our new CV3 SoC for single-chip perception of multiple sensors. This central domain controller SoC family can simultaneously process HD radar and vision captures, while providing fusion and path planning for ADAS to L4 autonomous vehicles.”