Got Self-Driving Architecture? Show Me

By Christoph Hammerschmidt

Judging from pitches we’ve heard so far from chip suppliers such as Nvidia, Mobileye and NXP, their conceptions of an autonomous car platform (and how they plan to get there) tend to diverge. As long as everyone’s jockeying for market position by leveraging what they already have and what they think can beat the others, that’s understandable.



Is this perhaps what today’s Google Car looks like inside? (Source: Kalray)


However, it’s important to remember that the challenges facing OEMs and tier ones are the same: a growing number of ECUs; a variety of sensors piling into autonomous cars; sensory data that need to be processed, analyzed and fused; and security — the pot of gold for connected cars. Then, there are still evolving factors such as advanced vision processing, deep learning and mapping that will affect processing power demanded in the new system architecture.

So, here’s the $64 million question. Do carmakers and tier ones today already know their autonomous car system architecture in 2020?

They don’t. At least, not yet, Eric Baissus, CEO of Kalray, told EE Times in a recent interview.

That’s why Kalray, a Grenoble-based startup, believes it has a good chance to move its Massively Parallel Processor Array (MPPA) processor featuring 288 VLIW cores into the market.

Kalray’s background is in extreme computing, originally developed for nuclear bomb simulations at the CEA, the French Atomic Energy Commission, in Grenoble, France. Today, Kalray is focused on the critical embedded market (aerospace) and on cloud computing.

In Baissus’ mind, self-driving cars fall into the critical embedded market, because they must absorb a lot of data coming from sensors inside and outside the vehicle, process it quickly and then make fast decisions.

Baissus said that the automobile industry needs “a new generation of processors that will have the ability to handle multi-domain function integration and perform processing tasks at an extremely high level.”

Sure, the so-called “manycore revolution” has already arrived, Baissus said. “But nobody has successfully designed massively parallel ‘supercomputing on a chip’ with more than 100 cores.” Kalray’s newest-generation 288-core processor, Bostan, integrates 16 clusters of 17 cores each, 2MB of shared memory (SMEM) per cluster running at 80GB per second, and 16 system cores.

Further, Bostan is a “time-critical enabled network-on-chip,” said Baissus, with a high-speed Ethernet interface (8×1GbE to 10GbE). It is capable of “on-the-fly encryption and decryption,” and it offers “easy connection to GPU/FPGA accelerator.”

As a result, the Bostan MPPA architecture can offer energy-efficient, DSP-type acceleration with timing predictability; multi-domain support (different clusters, for example, could run the different embedded operating systems used in different parts of a car); and scalable, massively parallel computing (processors can be “tiled together to adapt to system complexity”).

Kalray’s Massively Parallel Processor Array Architecture.
(Source: Kalray)


Determinism and C/C++
But isn’t this “supercomputing-on-chip” pitch for autonomous cars similar to what Nvidia is promoting with its Drive PX?

Nvidia calls Drive PX “the world’s most advanced autonomous car platform,” combining deep learning, sensor fusion, and surround vision.

The big difference, as Baissus contends, lies in two things. First, Kalray’s solution is “certifiable.” “By that I mean we can prove determinism, and we can guarantee timing,” he said. “In high-performance computing, a one-second delay is OK. But in a critical embedded market — such as aerospace and automotive — a 10-millisecond delay could be fatal.”

Second, for Nvidia’s chip, programmers need to know CUDA, he said. “Our chip can run standard C/C++ code using standard tools and Linux.” Automakers already have a lot of legacy code, algorithms written in C. Even when carmakers move onto the new autonomous car platform, legacy code will be important, Baissus explained.

Nvidia is not alone in anticipating the need for far more processing power. Mobileye recently upped the ante by “pre-announcing” EyeQ5, promising to deliver engineering samples in 2018.

EyeQ5, to be designed in an advanced FinFET technology node of 10nm or below, will feature eight multithreaded CPU cores coupled with eighteen cores of Mobileye’s next-generation vision processors. The company said the EyeQ5 will deliver more than 12 tera-operations per second while keeping power consumption below 5W.

Nobody, including Baissus, is taking Mobileye lightly. Unlike Nvidia’s Drive PX, which many industry observers regard as a “test platform” for autonomous cars, Mobileye is going after the commercial market with increased processing power at a much lower power consumption level.

By leveraging its proven vision-processing algorithms, EyeQ5 is now taking on data fusion, combining inputs from as many as 20 external sensors (cameras, radars or lidars).

But can EyeQ5 serve as the master ECU inside the autonomous car? A Mobileye spokesman explained to EE Times that EyeQ5 will handle not just “data fusion” but also “decision making.” Translating those decisions into action, however, will take place elsewhere — on a “low-level ECU” chosen by automakers, he added.

Kalray is positioning the role of its manycore processor somewhat differently from Mobileye or Nvidia.

Need for a Super ECU?
Baissus told EE Times, “There have been a lot of advances made in sensors and machine-learning algorithms” necessary for autonomous cars. “But nothing has been really done in the processor domain.” This is where Kalray sees its opening.

In his opinion, the next-generation processors in autonomous cars need to perform functions well beyond data fusion. “They have to act more as open platforms,” he said. Kalray hopes to provide an open processing hub for autonomous vehicles, which he calls a “Super ECU.”

The Super ECU would integrate dozens of multi-domain functions on the same die, bringing superior results in such critical areas as “sensing, learning, security, network, safety and cost,” he explained.

Without naming names, Baissus told EE Times that leading car OEMs and tier ones are using the current Kalray platform to build their first prototype fleets. He acknowledged that the autonomous car’s system architecture is “still not mature.”

But through collaborations with key players, Baissus hopes to learn more about carmakers’ needs as Kalray defines its next generation of solutions for autonomous cars.

Asked about his business model, Baissus acknowledged that licensing its MPPA architecture to other automotive chip suppliers is also an option.

System architecture
Asked to compare how system architecture might evolve, Baissus shared the following diagrams.

System architecture of a car today (Level 1, Level 2).
(Source: Kalray)


Today’s cars have a collection of localized ECUs. They combine sensor, processing and control functions.


System architecture of a car tomorrow (Level 3, 4, 5).
(Source: Kalray)

Kalray hopes to offer a Master ECU that aggregates ECUs (for density and cost) and sensor data (for smarter control). Note that Baissus isn’t saying the Master ECU will do everything. If machine learning, for example, needs to accelerate an algorithm, the “Master ECU will connect to accelerators when needed or a dedicated ECU,” Baissus explained.

In the end, Kalray believes its manycore architecture can shine in autonomous cars in numerous ways. It can run “dozens of different control and data processing algorithms in parallel and in real time.” Further, it can offer highly efficient machine learning. But most important, it offers very low latency.

— Junko Yoshida, Chief International Correspondent, EE Times

