When creating the sensor data visualization, a graphical editor simplifies the assignment of information from the application data objects (position, size, color coding per object type) to the graphic objects. Several predefined 2D/3D shapes and a point cloud object for Lidar/ToF cameras make it possible to display the detected sensor objects, environmental contours and fusion results quickly, with little or no coding effort. Once the graphic objects have been bound to data, they can be displayed synchronously in several windows.
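The data binding described above can be sketched in code. This is a minimal, hypothetical illustration of the principle, not vADASdeveloper's actual API: fields of an application data object (position, size, type) are mapped to properties of a graphic primitive, with a color code per object type; all names are illustrative.

```python
from dataclasses import dataclass

# Color coding per object type (illustrative values, RGB)
COLOR_BY_TYPE = {
    'car': (0, 0, 255),
    'pedestrian': (255, 0, 0),
    'truck': (0, 255, 0),
}

@dataclass
class DetectedObject:
    """An application data object as a fusion algorithm might deliver it
    (hypothetical structure for illustration)."""
    x: float
    y: float
    width: float
    length: float
    obj_type: str

def to_graphic(obj: DetectedObject) -> dict:
    """Bind data-object fields to graphic-object properties."""
    return {
        'shape': 'box2d',   # one of the predefined 2D shapes
        'position': (obj.x, obj.y),
        'size': (obj.width, obj.length),
        # unknown types fall back to a neutral gray
        'color': COLOR_BY_TYPE.get(obj.obj_type, (128, 128, 128)),
    }
```

Once such a mapping is defined per object type, every incoming data object can be rendered without further per-frame code.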
In order to interpret sensor data and fusion results correctly, developers have a range of new display options at their disposal. The hardware-accelerated 3D scene window of vADASdeveloper, for example, displays extensive environment models and reference data. Vehicle positions and location-dependent information are shown in the OpenStreetMap-based map window. Reference cameras for capturing the real vehicle environment can now be calibrated semi-automatically, and the recorded video image can then be overlaid with the objects detected by the ECU. This visual comparison of the object information supplied by the sensors or algorithms with the real environment lets developers verify object recognition algorithms quickly and reliably.
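Overlaying detected objects on a calibrated camera image boils down to projecting 3D object positions into image coordinates. The following sketch uses the standard pinhole camera model with the intrinsic parameters that a calibration yields (focal lengths fx, fy and principal point cx, cy); it is a generic illustration of the technique, not vADASdeveloper's implementation.

```python
def project_to_image(point_cam, fx, fy, cx, cy):
    """Project a 3D point given in camera coordinates (X right, Y down,
    Z forward) onto the image plane with the pinhole model:
        u = fx * X / Z + cx,   v = fy * Y / Z + cy
    Returns None for points behind the camera (nothing to overlay)."""
    X, Y, Z = point_cam
    if Z <= 0:
        return None
    return (fx * X / Z + cx, fy * Y / Z + cy)

# Example: an object 10 m ahead and 1 m to the right, with illustrative
# intrinsics for a 1280x720 camera (fx = fy = 1000 px, principal point
# at the image center).
pixel = project_to_image((1.0, 0.0, 10.0), 1000.0, 1000.0, 640.0, 360.0)
```

With the pixel position known, a bounding box or marker can be drawn at that location in the video window, making mismatches between detected and real objects immediately visible.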
In addition to the already possible connection of radar sensors or object cameras via vehicle networks, vADASdeveloper now also reads in data from Lidar sensors directly: ibeo LUX, ibeo.HAD and Velodyne (VLP-16/HDL-32E). The scalable, decentralized recorder solution already introduced in the CANape measurement and calibration tool is used to record the steadily increasing quantities of sensor data during road tests. This enables measurement data rates of over 1 gigabyte per second for XCP-on-Ethernet, video and radar raw data.
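Reading a Lidar sensor "directly" means decoding its raw UDP stream. As an illustration, the sketch below parses one 100-byte data block of a Velodyne VLP-16 packet, following the layout published in Velodyne's user manual (flag bytes 0xFF 0xEE, azimuth in hundredths of a degree, then 32 returns of 2-byte distance in 2 mm units plus 1-byte reflectivity). This is an independent sketch of the packet format, not Vector's decoder.

```python
import struct

BLOCK_FLAG = 0xEEFF          # wire bytes FF EE, read little-endian
DISTANCE_RESOLUTION_M = 0.002  # 2 mm per raw unit per the VLP-16 manual

def parse_vlp16_block(block: bytes):
    """Parse one 100-byte VLP-16 data block.

    Returns (azimuth_deg, returns) where returns is a list of 32
    (distance_m, reflectivity) tuples - two firing sequences of the
    16 lasers."""
    flag, azimuth_raw = struct.unpack_from('<HH', block, 0)
    if flag != BLOCK_FLAG:
        raise ValueError('not a VLP-16 data block')
    returns = []
    for i in range(32):
        dist_raw, refl = struct.unpack_from('<HB', block, 4 + 3 * i)
        returns.append((dist_raw * DISTANCE_RESOLUTION_M, refl))
    return azimuth_raw / 100.0, returns
```

A full 1206-byte packet payload carries 12 such blocks plus a timestamp and two factory bytes; a recorder loop would apply this parser per block as packets arrive.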
More information: www.vector.de/vadasdeveloper