eeNews Europe: What exactly does Baselabs do? Do you produce development tools, or does your software run as middleware directly in the cars?
Eric Richter: Baselabs sees itself as a software producer, especially for autonomous driving. This means that we provide tools and libraries that enable our customers to develop their systems for autonomous driving faster and more efficiently.
eeNews Europe: Libraries – that is, ready-made modules and function blocks with their own functionality, rather than mere aids for developers who write the software themselves?
Richter: Exactly. The libraries already contain a lot of functionality, many algorithms, especially for data fusion. A core technology, the heart of autonomous driving, is the perception of the vehicle environment in order to derive actions from it: braking, accelerating, changing lanes and so on. All of this is captured by the environment sensors. The big task to be solved is to combine the variety of sensors – radar, lidar, cameras, etc. – with one another. That is data fusion. It is the core of our software, and it runs directly in the cars.
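The principle of combining several sensors can be illustrated with a minimal sketch (illustrative Python only, not Baselabs' actual API): two sensors measure the same quantity, and the estimates are fused by inverse-variance weighting, so the more precise sensor gets more weight.

```python
# Minimal data fusion sketch (hypothetical example): fuse scalar measurements
# of the same quantity, each given as (value, variance), by weighting each
# measurement with the inverse of its variance.

def fuse(measurements):
    """Return the fused value and fused variance of (value, variance) pairs."""
    weights = [1.0 / var for _, var in measurements]
    total = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, measurements)) / total
    return value, 1.0 / total

radar = (10.2, 0.25)   # range in m; radar is precise in range
camera = (9.5, 1.0)    # camera is less precise in range
fused_value, fused_var = fuse([radar, camera])
print(round(fused_value, 2), round(fused_var, 2))  # 10.06 0.2
```

The fused variance (0.2) is smaller than either single sensor's variance, which is the basic payoff of data fusion.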
eeNews Europe: What role does Vector play for you?
Richter: Vector is a shareholder of Baselabs with a minority stake. For us this means that we are independent of all OEMs and Tier Ones, because Vector itself is independent of OEMs and Tier Ones. The second important point for Baselabs: Vector has many years of experience in the automotive sector, both in pre-development and series development, and can support us with expertise. In addition, we gain access to Vector’s worldwide sales network.
eeNews Europe: Thus you are independent of OEMs, but not of Vector. Vector has a significant market share.
Richter: Vector has only a minority interest in the capital of Baselabs and therefore also in the voting rights. The majority of voting rights is held by the four founders.
eeNews Europe: This means that you can also cooperate with Vector’s competitors – such as Elektrobit, to name just one.
Richter: Exactly. We are already doing that. We work with various companies, including Vector’s competitors. Some of our products are compatible with products from various Vector competitors. This is what both sides want and support within the framework of our cooperation.
eeNews Europe: Are there any standards for data formats and interfaces in the area in which you are active? For example, if you supply Ford today and Audi or Porsche tomorrow – do OEMs work with standardized data formats and function calls? We see a very heterogeneous sensor environment, and the computing platforms are also very different. Do you have to reinvent the wheel every time?
Richter: From a formal point of view, you’re right. Of course, there are quasi-standards where sensor interfaces are very similar, for example in the radar area. However, there are various activities to standardize the sensor interfaces. One of them is part of Autosar, the middleware for series production. There is a special working group there in which Baselabs is involved and which is managed by Baselabs. This group has set itself the task of developing sensor interfaces for automated driving functions and making them available to the community. We are also involved in other activities to promote standardization in the field of data fusion in order to further reduce the costs and development time of future systems. Standards in data fusion will come, everyone will benefit.
eeNews Europe: Talking about radar. I assume that the radar sensor of a company X provides a different structure of a point cloud than the one of company Y. With cameras, it is perhaps even more pronounced, since preprocessing is already partly carried out in the camera. Developers then have to deal with completely different data.
Richter: Exactly. This is an important issue. There are different data levels for each sensor; even the terms are not precisely defined. Many speak of raw data or feature level data, detection level data and object level data – these are the usual three to four levels that are distinguished. The exact definition differs slightly from manufacturer to manufacturer. For us it is important to take a close look at what level a sensor delivers. The two highest levels – object level and detection level – have existed for the longest time; this is where we have already made the most progress with our product range. Newer approaches, which we are also developing at Baselabs, such as the Dynamic Grid, a new algorithmic procedure, primarily address the lower levels, i.e. feature level and raw data.
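The three to four levels Richter distinguishes can be written down as a simple enumeration (a hypothetical sketch; the names follow the interview, not any standardized interface):

```python
from enum import Enum

class SensorDataLevel(Enum):
    """Common sensor data abstraction levels, lowest to highest."""
    RAW = 0        # unprocessed samples, e.g. a full radar spectrum
    FEATURE = 1    # extracted features, e.g. lidar point clouds
    DETECTION = 2  # individual detections per measurement cycle
    OBJECT = 3     # tracked objects with state estimates (position, velocity)

# A fusion system must know which level each sensor delivers, because the
# exact cut between levels varies from manufacturer to manufacturer.
print(SensorDataLevel.OBJECT.name, SensorDataLevel.OBJECT.value)
```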
eeNews Europe: Dynamic Grid? Please explain.
Richter: This is our term for this group of methods. The background: you have to reliably determine the free space around the vehicle in order to calculate the trajectory you want to travel. So far, occupancy grids have mainly been used for this. However, these methods have some decisive disadvantages. Above all, they are not able to distinguish between static and dynamic objects. At the higher SAE autonomy levels, from Level 3 upwards – functions such as a motorway pilot – this causes difficulties. This is where the new group of methods, which we call Dynamic Grid, comes in. For each spatial element, i.e. per grid cell, the method determines not only whether the cell is occupied, but also in which direction the occupying object is moving and at what speed. This helps to distinguish between dynamic and static objects, and the method can directly process point clouds from lidar sensors or HD radar images.
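The difference to a classic occupancy grid can be sketched as a data structure (illustrative Python, with names and thresholds invented for this example – not Baselabs' actual implementation): each cell carries a velocity estimate in addition to the occupancy probability, so occupied cells can be labeled static or dynamic.

```python
from dataclasses import dataclass

@dataclass
class DynamicGridCell:
    """One cell of a dynamic grid: occupancy plus a velocity estimate."""
    occupancy: float   # probability that the cell is occupied (0..1)
    vx: float = 0.0    # estimated velocity of the occupying object, m/s
    vy: float = 0.0

    def classify(self, occ_thresh=0.5, v_thresh=0.3):
        """Label the cell as 'free', 'static' or 'dynamic'."""
        if self.occupancy < occ_thresh:
            return "free"
        speed = (self.vx ** 2 + self.vy ** 2) ** 0.5
        return "dynamic" if speed >= v_thresh else "static"

parked_car = DynamicGridCell(occupancy=0.9)
oncoming = DynamicGridCell(occupancy=0.8, vx=-13.9)  # ~50 km/h towards us
print(parked_car.classify(), oncoming.classify())  # static dynamic
```

A plain occupancy grid would report both cells simply as "occupied"; the velocity estimate is what lets the planner treat a parked car differently from oncoming traffic.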
eeNews Europe: How will Sensor Data Fusion develop? We are dealing with ever more powerful computer platforms that deliver ever larger amounts of data. Will this area reach its limits at some point? Must the peripheral assemblies become more intelligent so that they can reduce their data volumes?
Richter: We at Baselabs believe that the centralization of data fusion will continue. Some manufacturers, such as Audi, are already going down this path – with central, high-performance computing platforms that also perform the data fusion centrally. Yes, the amount of data will continue to increase, but so will the quality of the data processed there. In my opinion, the trend will go towards raw data fusion, especially for higher-value driving functions. From autonomy Level 3 onwards in particular, we see a clear trend towards raw data fusion in central fusion control units.
eeNews Europe: Would artificial intelligence be a possible approach to take over tasks in the field of data fusion?
Richter: This is quite conceivable and currently the subject of a lot of research. At present, however, methods based on artificial intelligence can be applied almost exclusively to the data of individual sensors such as cameras. For data fusion, tracking and plausibility checking of data from several sensors such as camera, radar and lidar, classical data fusion approaches as downstream components are still very effective and thus the ideal complement to AI-based methods. Another big question in the use of artificial intelligence is how to safeguard the system – how to arrive at a safety architecture for such systems.
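The role of classical fusion as a downstream plausibility check can be sketched as follows (a hypothetical example with invented names and thresholds): an AI-based detection from one sensor is only confirmed as a fused object if it is independently supported by enough other modalities.

```python
# Hypothetical sketch of downstream plausibility checking: a detection is
# confirmed only if enough independent sensor modalities support it with
# sufficient confidence. Thresholds are illustrative, not from any product.

def confirm_object(reports, min_modalities=2, conf_thresh=0.5):
    """reports: dict mapping sensor modality -> detection confidence (0..1)."""
    supporting = [m for m, conf in reports.items() if conf >= conf_thresh]
    return len(supporting) >= min_modalities

# Camera (AI-based) sees an object; radar corroborates it -> confirmed.
print(confirm_object({"camera": 0.9, "radar": 0.7, "lidar": 0.2}))  # True
# Only the camera sees it -> not confirmed, likely a false positive.
print(confirm_object({"camera": 0.9, "radar": 0.1, "lidar": 0.2}))  # False
```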
eeNews Europe: Is this to be understood as meaning that artificial intelligence is not compatible with the functional safety standard ISO 26262, which always tends to enforce a very deterministic behavior?
Richter: Exactly. There are a lot of discussions about ISO 26262 in this area – it is not well suited to the use of artificial intelligence. Irrespective of ISO 26262, the question will soon arise as to how far artificial intelligence can be trusted in safety-critical systems and how long a parallel fallback path must be maintained. It is quite clear that in some scenarios AI-based methods will deliver better performance and thus, for example, enable more comfortable driving. This is undisputed; what is questionable is whether they can always achieve this. Here we say, like many others in the engineering community, that we do not yet know this with 100% certainty. For safety reasons, therefore, we must continue to run classic data fusion methods in parallel. This makes it possible to establish a safe state if the two paths disagree. For this, the classic methods are still needed – at least that is how we and large parts of the community see it.
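The parallel-path arbitration Richter describes can be sketched in a few lines (illustrative only – names, units and the tolerance are invented, and a real safety architecture is far more involved): the AI path and the classic fusion path each estimate the free distance ahead, and disagreement beyond a tolerance triggers a safe state.

```python
# Minimal sketch of parallel-path arbitration (hypothetical example):
# both the AI-based path and the classic data fusion path report the free
# distance ahead in metres; disagreement triggers a degraded safe state.

def arbitrate(ai_free_m, classic_free_m, tolerance_m=2.0):
    if abs(ai_free_m - classic_free_m) > tolerance_m:
        return "SAFE_STATE"  # e.g. hand over to the driver, minimal-risk manoeuvre
    # Paths agree: act on the more conservative (shorter) estimate.
    return ("NORMAL", min(ai_free_m, classic_free_m))

print(arbitrate(45.0, 44.2))  # ('NORMAL', 44.2)
print(arbitrate(45.0, 12.0))  # 'SAFE_STATE'
```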