
“We need standardized criteria for autonomous driving”

Interviews | By Christoph Hammerschmidt



eeNews Europe: Recently, the first fatal accident involving an autonomous vehicle occurred. Do you think this accident will have an impact on the acceptance and technology of self-driving vehicles?

Amnon Shashua: It is hard to say how things will play out. I think the development is too far along to stop it. What is necessary is making the way these machines make their decisions more transparent. Today there is no transparency. The only transparency you have is how many miles they have driven and how many “incidents of disengagement” occurred, with disengagement meaning that the safety driver had to take over. This kind of measure is very weak and not informative, because I can get a low disengagement rate by simply driving around my house. In order to properly create an autonomous vehicle, you have to drive in challenging situations where the disengagement rate at first will be high. To get more transparency, we need to define safety in a way that regulatory bodies, industry actors and technology providers can all agree on a standard: how you define the safety of your decisions, what it means to be in a dangerous situation, and what it means to act properly to get out of such a situation.
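
As a toy illustration of that point, the sketch below shows why a raw disengagement rate says little about decision quality; every figure and the helper function are invented for the example and are not data from any fleet.

```python
# Illustrative only: why a raw disengagement rate is a weak safety metric.
# All numbers below are invented for the example.

def disengagement_rate(disengagements: int, miles: float) -> float:
    """Disengagements per 1,000 autonomous miles."""
    return 1000.0 * disengagements / miles

# Fleet A drives only easy suburban loops ("around my house").
easy = disengagement_rate(disengagements=2, miles=50_000)

# Fleet B deliberately seeks out challenging urban situations.
hard = disengagement_rate(disengagements=40, miles=50_000)

print(f"easy routes: {easy:.2f} per 1,000 miles")  # 0.04
print(f"hard routes: {hard:.2f} per 1,000 miles")  # 0.80
# The lower number says nothing about which system makes safer decisions.
```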

eeNews Europe: How can such a standardization be initiated within the industry, and what would the criteria be?

Shashua: We have developed a formal model called Responsibility Sensitive Safety (RSS). This model is our attempt to put forward something solid and mathematical as a starting point for a conversation on standardization. RSS does not favor or disadvantage anyone’s technology. We’re not standardizing an algorithm.


What we want to standardize are the criteria for what it means to be in a dangerous situation and what it means to get out of such a situation. Then you can do whatever you want to do in terms of decision-making to meet these criteria, as long as your model is certified to satisfy these kinds of definitions.
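
One concrete instance of such a criterion is the longitudinal safe-distance rule from the published RSS paper (Shalev-Shwartz, Shammah, Shashua, 2017). The sketch below is a minimal illustration of that rule; the parameter values and function names are assumptions chosen for the example, not Mobileye’s certified implementation.

```python
# Minimal sketch of the RSS longitudinal safe-distance rule.
# Parameter values below are illustrative assumptions.

def rss_min_longitudinal_gap(v_rear: float, v_front: float,
                             rho: float = 0.5,          # response time [s]
                             a_accel_max: float = 3.0,  # rear max accel during response [m/s^2]
                             a_brake_min: float = 4.0,  # rear guaranteed braking [m/s^2]
                             a_brake_max: float = 8.0   # front max possible braking [m/s^2]
                             ) -> float:
    """Smallest gap [m] such that the rear car can always stop in time,
    even if the front car brakes as hard as physically possible."""
    v_after_response = v_rear + rho * a_accel_max
    d = (v_rear * rho
         + 0.5 * a_accel_max * rho ** 2
         + v_after_response ** 2 / (2 * a_brake_min)
         - v_front ** 2 / (2 * a_brake_max))
    return max(d, 0.0)

def is_dangerous(actual_gap: float, v_rear: float, v_front: float) -> bool:
    # Dangerous situation: the actual gap is below the RSS minimum.
    return actual_gap < rss_min_longitudinal_gap(v_rear, v_front)

# Example: both cars at 20 m/s (72 km/h), 30 m apart.
print(rss_min_longitudinal_gap(20.0, 20.0))  # about 43 m with these parameters
print(is_dangerous(30.0, 20.0, 20.0))        # True -> proper response: brake
```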

eeNews Europe: Is it possible to formulate such precise and comprehensive definitions? After all, these definitions would have to cover everything, every situation, right?

Shashua: During our research to create a formal model, we found that we can reduce the complexity of driving to four principles, and everything can be mapped onto these four principles. There is the NHTSA (National Highway Traffic Safety Administration) crash typology: They took 6 million crashes and divided them into 37 scenarios. We took all these 37 scenarios, ran them through our model and found that it complies with human judgement. We are now looking for additional scenarios from other bodies, not only NHTSA, and so far, all the studies we have been doing confirm that the model really reflects human judgement in terms of who is to blame for an accident.


We are open to adding or modifying certain definitions. What we are saying is that it is time to engage with the regulatory bodies in order to standardize this kind of set of definitions.

eeNews Europe: Beyond the discussion about autonomous cars, there are lots of other interesting aspects to Mobileye’s business. A year and a half ago, Mobileye, Intel and BMW agreed to collaborate to create the “car of the future”. Can you give us an update on how far things have progressed?

Shashua: It is moving very well. So far, we have collected tens of petabytes of data with our 40 test cars. We have set up the sensor configuration, how the cameras are going to be placed. This required a lot of effort to understand the correct placement and the optical paths. There have been a number of other sensor-related decisions, for example who the lidar and radar suppliers are. All of this went as planned. Throughout the year we will move the center of this activity to the US. The idea is to gradually map the city of Santa Clara, starting from Intel’s headquarters in concentric circles, in order to facilitate autonomous driving within this city. This will be completed by the end of 2018. During the same time frame, we will finish the software stack of the sensing and driving policy and the RSS, and in 2019 comes the production hardware. Our EyeQ5 chip will come out at the end of this year, and in 2019 all the cars will be equipped with production hardware.


eeNews Europe: One year ago, Mobileye became part of Intel. How did this affect your scope of topics and technologies?

Shashua: The merger allowed us to expand and cover more parts of the development chain of autonomous vehicles. For example, one of the things we are doing today that we did not do before is building a fleet of a hundred cars. These hundred cars will have multiple purposes. One purpose is to practice in different locations like Jerusalem, Santa Clara, or Arizona. Another purpose is to continue collecting data. Another one is that all the technologies we are thinking of, from sensing to actuation to driving policy, safety, communication and mapping, are being uploaded to these test vehicles. This is a huge effort from a logistical point of view – an effort too big for a company the size of Mobileye, but appropriate for a company the size of Intel.

eeNews Europe: There is a discussion going on in the industry about how in-car computing architectures need to be further developed to accommodate the needs of autonomous driving. In this context, everybody is talking about heterogeneous multi-core architectures. What is your opinion about future processor architectures, and what are Mobileye’s plans?

Shashua: High-performance computing is an area in which Mobileye has been playing successfully for more than a decade. Our technology is not based on CPU architectures. Our technology sits on SoCs that we have designed over the years, called EyeQ. On the road now is the third generation, EyeQ3. Within the next two months, both BMW and Volkswagen as well as Nio and, I think, two more OEMs will come out with our fourth-generation chip, EyeQ4. These are very powerful chips with very low power consumption, and they are very heterogeneous in terms of the software stack they can run. EyeQ4 has 18 different cores – eight CPUs and ten specialized accelerators that allow for running optimized code for computer vision, AI, and deep neural networks. And then the EyeQ5, which is 10x stronger than the EyeQ4, will come out in September this year with first silicon.


In 2019 we will have hardware based on EyeQ5 for running autonomous vehicles, and in 2020/2021 it will be in volume production. Together with Intel we are now designing the sixth-generation chip, EyeQ6, which will also have Intel Atom cores inside, in addition to very powerful accelerator cores. Just to give you an idea: The EyeQ5 is going to run 24 tera-operations per second of deep learning at only 10 watts of power. So, for example, an entirely autonomous vehicle with sensing, driving policy, actuation, mapping and communication – all of this would run on two to three EyeQ5s, that’s it. EyeQ6, the generation coming after that, will offer five times more computing power, and a single chip like this would power an entire car.
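
A quick back-of-the-envelope check of these quoted figures; the EyeQ6 number is extrapolated from the stated “five times more computing power”, and its power consumption is not given in the interview.

```python
# Back-of-the-envelope arithmetic for the figures quoted above.
eyeq5_tops = 24.0    # deep-learning tera-operations per second, as quoted
eyeq5_watts = 10.0   # power consumption in watts, as quoted

print(eyeq5_tops / eyeq5_watts)  # 2.4 TOPS per watt for EyeQ5

# A full autonomous stack on "two to three EyeQ5s":
for n in (2, 3):
    print(f"{n} x EyeQ5: {n * eyeq5_tops:.0f} TOPS at {n * eyeq5_watts:.0f} W")

# EyeQ6 at roughly 5x the compute of EyeQ5 (assumed linear scaling; power not quoted):
print(5 * eyeq5_tops)  # ~120 TOPS on a single chip
```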

eeNews Europe: You are basically talking about a kind of central computer for cars?

Shashua: Yes, exactly. Based on technology that we have been developing for more than a decade. The EyeQ1, the first-generation chip, was launched in volume production in BMW, GM and Volvo vehicles back in 2007. This is exactly our expertise. Not only the software, but also the silicon.

