
Safety critical automotive image signal processor targets driverless cars

New Products
By Nick Flaherty



ARM has signed up Mobileye as the first licensee of an image signal processor (ISP) designed for safety-critical automotive systems for driver assistance and driverless cars.

The Mali-C78AE can be combined with the Cortex-A78AE CPU and the Mali-G78AE GPU to form a safety-critical ADAS vision pipeline.

Mobileye is the first to license the Mali-C78AE, in addition to the Mali-G78AE, for its next-generation EyeQ technology. The company, a subsidiary of Intel and a leading developer of systems for driverless cars, previously used the MIPS processor core.

ADAS applications such as collision avoidance, lane departure warnings and automated emergency braking increasingly rely on multiple cameras positioned around the car to enable these features.

The Mali-C78AE ISP core is designed specifically to address both human and machine vision safety applications, and is able to process data from up to four real-time or 16 virtual cameras. Its safety features allow it to meet ASIL B/SIL 2 diagnostic requirements and ASIL D/SIL 3 for the avoidance of systematic failure.

“We know safety is paramount in ADAS,” said Chet Babla, VP of Automotive, Automotive and IoT Line of Business at ARM. “A fault or failure in operation of an ADAS system could be dangerous, threatening the wellbeing of the driver, passengers, and other road users. Mali-C78AE was developed from the ground up with hardware safety mechanisms and diagnostic software features enabling system designers to meet ISO 26262 ASIL B functional safety requirements. Mali-C78AE aims to prevent or detect faults in a single camera frame that may result in incorrectly processed frame data. To do this, the ISP features over 380 fault detection circuits, continuous built-in self-test, and can detect sensor and hardware faults of connected cameras.”

The ISP core is designed around the requirement that it should take no more than 150 milliseconds to acquire an image at the sensor, process it through the ISP and then the GPU, and display it on a screen for the driver; anything longer is noticeable to the driver when using parking assist, for example. In a machine vision application, the vehicle should not travel more than 250mm between a camera image being acquired and it being presented to the decision-making processor; anything longer means the machine vision system is too slow to react in driving situations where accurate and timely decisions are critical.
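The 250mm travel constraint implies a latency budget that shrinks as the vehicle speeds up. As a rough illustration of the arithmetic only (not ARM code or an official specification), a minimal sketch:

```python
# Illustrative arithmetic based on the 250mm constraint quoted in the article:
# the vehicle should travel no more than 250mm between image acquisition and
# presentation to the decision-making processor, so the allowed camera-to-
# processor latency depends on vehicle speed.

MAX_TRAVEL_M = 0.250  # 250mm constraint


def latency_budget_ms(speed_kmh: float) -> float:
    """Maximum allowed pipeline latency in milliseconds at a given speed."""
    speed_m_per_s = speed_kmh / 3.6  # convert km/h to m/s
    return MAX_TRAVEL_M / speed_m_per_s * 1000.0


for speed in (30, 50, 130):
    print(f"{speed:4d} km/h -> {latency_budget_ms(speed):5.1f} ms budget")
```

At urban speeds the budget is around 30 ms, while at motorway speeds it drops to well under 10 ms, which is why end-to-end pipeline latency matters so much more for machine vision than for the 150 ms human-display case.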

The Mali-C78AE also uses advanced noise reduction technology and dynamic range management to ensure each frame is clear and properly exposed by adjusting overly dark or bright areas of a frame. It can perform real-time processing of camera data from up to four high-resolution, high-frame-rate cameras, significantly reducing the memory, communications, and processing requirements, making for a more efficient system.

This can also reduce the cost of implementing multiple ADAS functions, says Babla. The Mali-C78AE enables camera sensors to be dual-purpose by downscaling and colour-translating the outputs of sensors optimized for machine vision to create images adapted to the human eye. By avoiding duplication in cameras and their associated electronics and wiring, developers save on cost and complexity.

“A strong vision pipeline is increasingly important to powering the next phase of mass market ADAS deployment,” said Babla. “The vehicle is one of the most complex electronics-enabled devices consumers will buy, and it comes with several constraints the automotive industry must adhere to in order to continue to improve driver safety and user experience.”

www.arm.com
