As the industry progresses through the six levels of driving automation, each step requires increased processing capability inside the car. Intelligence in the IoT is already moving closer to the network edge; we can expect the same to happen in the automotive industry. The sensors needed to enable ADAS and autonomy require higher resolution and must process more data, more quickly and with less latency. The only practical way to achieve this is to place more processing capability as close to the sensor as possible, within the zonal architectures of future cars.
If vehicles are to become more autonomous, they need to understand the surrounding environment. Sensors provide that capability. The three key sensor modalities are radar, lidar and vision. Each has relative strengths and weaknesses, which means all three will need to operate together. Combining multiple sensor technologies in a sensor fusion system provides true redundancy for functional safety and improves sensing accuracy. The fourth element, connectivity, enables these systems to interoperate. Together, these four elements provide the foundation for the era of autonomous vehicles.
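How fusing modalities improves accuracy can be illustrated with a minimal sketch: inverse-variance weighting of two independent range estimates for the same target. This is a simplified stand-in for a production fusion stack, and the sensor noise figures below are illustrative assumptions, not real device specifications.

```python
def fuse_ranges(measurements):
    """Fuse (range_m, variance_m2) pairs via inverse-variance weighting.

    Returns (fused_range_m, fused_variance_m2). The fused variance is
    always smaller than any individual sensor's, which is the statistical
    basis for the accuracy gain from combining modalities.
    """
    weight_sum = sum(1.0 / var for _, var in measurements)
    fused_range = sum(r / var for r, var in measurements) / weight_sum
    return fused_range, 1.0 / weight_sum


# Illustrative values: radar is robust in poor weather but noisier;
# lidar offers high-resolution ranging with lower variance.
radar = (49.2, 0.25)
lidar = (50.1, 0.04)

fused_range, fused_var = fuse_ranges([radar, lidar])
# The fused estimate sits between the two readings, weighted toward
# the lower-variance lidar, and its variance is below both inputs.
```

The same principle underpins redundancy: if one modality degrades (for example, lidar in heavy fog), its variance rises and the fusion automatically leans on the remaining sensors.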
Meeting the need for ASIL-certified IP
Autonomous vehicles feature multiple sensors of three main types. Radar is expected to feature strongly, with 20 or more individual sensors distributed around a vehicle. As the technology best suited to providing high-resolution ranging data, lidar will play a key role in autonomous driving. And as the most mature technology, at least in terms of its use in automotive applications, image sensors will proliferate in autonomous vehicles. All three will be used in multiple ways, such as monitoring the terrain, other road users, pedestrians and weather conditions, as well as absolute and relative speeds. Sensors will also monitor the vehicle's interior, for occupant detection and observation.