
Centralised signal processing, AI open up new possibilities for 4D imaging radar

Technology News | By Christoph Hammerschmidt



One central processor instead of many radar front ends: Ambarella provides car OEMs with new design options.

Until now, the use of radar for environmental detection in autonomous or assisted driving has required numerous front ends and MIMO antennas. Californian chip manufacturer Ambarella is breaking with this paradigm: a single central processor now handles all radar signals. This opens up new design possibilities for car manufacturers.

Current vehicle radar systems rely on a large number of antennas integrated into the bumpers. They deliver gigantic amounts of data, on the order of terabits per second, which is preprocessed and reduced on the spot by processors installed in the radar modules directly next to the antennas. The high number of these chips drives up the price as well as the energy consumption. Ambarella now wants to radically simplify this complex system: using AI technology from its acquisition Oculii and the newly developed CV3 central processor, the company enables all radar data to be processed at a central point in the vehicle. This not only simplifies the architecture, but also significantly improves the resolution of the radar data.

Ambarella’s new architecture allows both central processing of raw radar data and deep, low-level fusion with other sensor inputs, including cameras, lidar and ultrasonics. This architecture is intended to provide greater environmental perception and safer path planning in AI-based ADAS and L2+ to L5 autonomous driving systems, as well as in autonomous robotics. It features Ambarella’s Oculii radar technology, including what the company calls the only AI software algorithms that dynamically adapt radar waveforms to the surrounding environment, providing an angular resolution of 0.5 degrees, an ultra-dense point cloud of up to tens of thousands of points per frame and a detection range of 500 metres and more. This is achieved with an order of magnitude fewer MIMO antenna channels, which reduces the data bandwidth and results in significantly lower power consumption than competing 4D imaging radars. According to Ambarella, the approach provides a flexible, high-performance perception architecture that enables system integrators to future-proof their radar designs.
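To put the 0.5-degree figure in context, the short sketch below applies the textbook approximation that beamwidth is roughly wavelength divided by aperture. It is not Ambarella's method, only a rough illustration of why a conventional MIMO imaging radar would need a virtual array of thousands of elements to reach that resolution in both azimuth and elevation, which is the baseline against which "an order of magnitude fewer channels" should be read.

```python
import math

def elements_for_beamwidth(res_deg: float, spacing_wavelengths: float = 0.5) -> int:
    """Rough count of array elements needed along one axis for a given
    beamwidth, using beamwidth [rad] ~ wavelength / aperture with
    half-wavelength element spacing. Textbook approximation, not a figure
    from Ambarella."""
    res_rad = math.radians(res_deg)
    return math.ceil(1.0 / (res_rad * spacing_wavelengths))

per_axis = elements_for_beamwidth(0.5)   # roughly 230 elements per axis
joint = per_axis ** 2                    # azimuth x elevation
print(f"~{per_axis} virtual elements per axis for 0.5 deg resolution")
print(f"~{joint:,} virtual elements for joint azimuth and elevation")
# Compare with the 6 Tx x 8 Rx (48-channel) processor-less radar heads the
# article describes later: the missing aperture is what Oculii's AI
# sparsification is claimed to synthesize.
```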

To create this architecture, Ambarella optimized the Oculii algorithms for its CV3 AI domain controller SoC family, built in 5 nm geometries, and added dedicated radar signal processing acceleration. The CV3’s industry-leading AI performance per watt offers the high compute and memory capacity needed to achieve high radar density, range and sensitivity. A single CV3 can efficiently provide high-performance, real-time processing for perception, low-level sensor fusion and path planning, centrally and simultaneously, the company promises.

The data sets of competing 4D imaging radar technologies are too large to transport and process centrally. They generate multiple terabits per second of data per module, while consuming more than 20 watts per radar module, because of the thousands of MIMO antennas each module uses to provide the high angular resolution required for 4D imaging radar. That is multiplied across the six or more radar modules required to cover a vehicle, making central processing impractical for these radar technologies, which must process data across thousands of antennas at the edge. Ambarella’s alternative to this demanding, complex setup is centralized processing with low-level sensor fusion. “It enables users to dynamically allocate processing around the car”, an Ambarella spokesperson explained. In practice, this means the system can change the scanning characteristics of the vehicle radars depending on the application: when parking, for example, resolution, field of view and range can be set differently than when driving on expressways.
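A quick back-of-envelope calculation makes the vehicle-level totals behind this argument concrete. The sketch below uses only the round numbers quoted in the article (more than 20 W and multiple terabits per second per edge module, six modules per vehicle, a 6x bandwidth reduction for the centralized approach); the 2 Tbit/s per-module figure and the baseline for the 6x reduction are illustrative assumptions, not Ambarella data.

```python
# Back-of-envelope per-vehicle totals, using the article's round numbers.
MODULES_PER_VEHICLE = 6       # "six or more radar modules ... to cover a vehicle"
RAW_TBPS_PER_MODULE = 2.0     # assumption within "multiple terabits per second"
POWER_W_PER_MODULE = 20.0     # "more than 20 watts of power per radar module"
BANDWIDTH_REDUCTION = 6.0     # claimed 6x reduction for radar data transport

edge_data_tbps = MODULES_PER_VEHICLE * RAW_TBPS_PER_MODULE
edge_power_w = MODULES_PER_VEHICLE * POWER_W_PER_MODULE
# Assumption: treat the 6x figure as relative to this raw edge data rate,
# purely to illustrate the order of magnitude involved.
central_data_tbps = edge_data_tbps / BANDWIDTH_REDUCTION

print(f"Edge-processed: ~{edge_data_tbps:.0f} Tbit/s raw data, >{edge_power_w:.0f} W "
      f"across {MODULES_PER_VEHICLE} modules")
print(f"Centralized:    ~{central_data_tbps:.0f} Tbit/s to transport after the "
      f"claimed {BANDWIDTH_REDUCTION:.0f}x reduction")
```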

Dynamic adaptation of radar characteristics

By applying AI software to dynamically adapt the radar waveforms generated by existing monolithic microwave integrated circuit (MMIC) devices, and by using AI sparsification to create virtual antennas, Oculii technology reduces the antenna array of each processor-less MMIC radar head in the new architecture to 6 transmit x 8 receive. Overall, the number of MMICs is drastically reduced, while still achieving a very fine 0.5 degrees of joint azimuth and elevation angular resolution. In addition, the centralized architecture consumes significantly less power and reduces the bandwidth needed for data transport by 6x, while eliminating the need for pre-filtered edge processing and its resulting loss of sensor information.
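How such dynamic adaptation might look from a system integrator's perspective can be sketched with a purely hypothetical configuration table: the scenario names, parameter values and the RadarProfile structure below are illustrative assumptions, not Ambarella's or Oculii's API, and simply make concrete the idea of switching characteristics such as resolution, field of view and range with the driving situation.

```python
from dataclasses import dataclass

@dataclass
class RadarProfile:
    """Hypothetical per-scenario radar settings; field names and values are
    illustrative only, not part of Ambarella's or Oculii's software."""
    angular_resolution_deg: float
    field_of_view_deg: float
    max_range_m: float
    frame_rate_hz: float

# Example of scenario-dependent settings a centralized processor could switch
# between at runtime, as described in the article (parking vs. expressway).
PROFILES = {
    "parking":    RadarProfile(angular_resolution_deg=1.0, field_of_view_deg=150.0,
                               max_range_m=30.0,  frame_rate_hz=30.0),
    "urban":      RadarProfile(angular_resolution_deg=0.8, field_of_view_deg=120.0,
                               max_range_m=150.0, frame_rate_hz=20.0),
    "expressway": RadarProfile(angular_resolution_deg=0.5, field_of_view_deg=60.0,
                               max_range_m=500.0, frame_rate_hz=15.0),
}

def select_profile(scenario: str) -> RadarProfile:
    """Pick the waveform/processing profile for the current driving scenario."""
    return PROFILES[scenario]

print(select_profile("expressway"))
```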

| Competing edge-processed radar | Ambarella’s centralized radar processing |
| --- | --- |
| Constant, repeated radar waveforms without regard for environmental conditions | Oculii™ AI software algorithms dynamically adapt radar waveforms to the surrounding environment |
| MMIC plus edge radar processor in each module | MMIC only in each “radar head” |
| Radar detection processing in the radar module | Radar detection processing in the central processor |
| Multiple terabits per second of radar data per module (too large to transport and process centrally) | 6x bandwidth reduction for radar data transport |
| 1+ to 2 degrees of angular resolution | 0.5 degrees of joint azimuth and elevation angular resolution |
| High power consumption, due to thousands of antenna MIMO channels per radar module | Low power consumption, due to an order of magnitude fewer antenna MIMO channels (6 transmit x 8 receive antennas in each processor-less MMIC radar head) |
| No dynamic processing allocation (specified for worst-case scenarios) | Dynamic allocation of the CV3’s processing resources, based on real-time conditions, between sensor types and among sensors of the same type |
| Slow processing speeds | CV3 is up to 100x faster than traditional edge radar processors |

Table: Comparison of edge-processed and centralized approaches to radar processing

The CV3 also marks the debut of Ambarella’s next-generation CVflow architecture, with a neural vector processor and a general vector processor, which were both designed by Ambarella from the ground up to include radar-specific signal processing enhancements. These processors work in tandem to run the Oculii advanced radar perception software with higher performance, including speeds up to 100x faster than traditional edge radar processors can achieve.

Target applications include ADAS and level 2+ to level 5 autonomous vehicles, as well as autonomous mobile robots (AMRs) and automated guided vehicle (AGV) robots. These designs are streamlined by Ambarella’s unified and flexible software development environment, which provides automotive and robotics designers with a software-upgradable platform for scaling performance from ADAS and L2+ to L5.

The company plans to demonstrate the new architecture at CES 2023.

www.ambarella.com

 

