Do carmakers know what their self-driving car architecture will be in 2020? – Part 1

Article By : Junko Yoshida

Do carmakers and tier ones today already know their autonomous car system architecture in 2020? EE Times' Junko Yoshida explores various autonomous car system architectures in this two-part series.

Detail your self-driving car architecture and what you expect it to look like in 2020. Then draw us a block diagram.

Judging from pitches we’ve heard so far from chip suppliers such as Nvidia, Mobileye and NXP, their conceptions of an autonomous car platform (and how they plan to get there) tend to diverge. As long as everyone’s jockeying for market position by leveraging what they already have and what they think can beat the others, that’s understandable.

[kalray autonomous car]
__Figure 1:__ *Is this perhaps what today’s Google Car looks like inside? (Source: Kalray)*

However, it’s important to remember that the challenges facing OEMs and tier ones are the same: a growing number of ECUs; a variety of sensors piling into autonomous cars; sensory data that need to be processed, analysed and fused; and security — the pot of gold for connected cars. Then, there are still evolving factors such as advanced vision processing, deep learning and mapping that will affect processing power demanded in the new system architecture.

So, here’s the $64 million question. Do carmakers and tier ones today already know their autonomous car system architecture in 2020?

They don’t. At least, not yet, Eric Baissus, CEO of Kalray, told EE Times in a recent interview.

[Eric Baissus]
__Figure 2:__ *Kalray CEO, Eric Baissus*

That’s why Kalray, a Grenoble-based startup, believes it has a good chance to move its Massively Parallel Processor Array (MPPA) processor featuring 288 VLIW cores into the market.

Kalray’s background is in extreme computing originally designed for nuclear bomb simulations at CEA, the French Atomic Energy Commission, based in Grenoble, France. Today, Kalray is focused on the critical embedded market (such as aerospace) and on cloud computing.

In Baissus’ mind, self-driving cars fall into the critical embedded market, because they must absorb a lot of data coming from sensors inside and outside the vehicle, process it fast and then make quick decisions.

Baissus said that the automobile industry needs “a new generation of processors that will have the ability to handle multi-domain function integration and perform processing tasks at an extremely high level.”

Sure, the so-called “manycore revolution” has already come, Baissus said. “But nobody has successfully designed massively parallel ‘supercomputing on a chip’ with more than 100 cores.” Kalray’s newest-generation 288-core processor, Bostan, integrates 16 clusters of 17 cores each — with 2MB of shared memory (SMEM) per cluster at 80GB per second — plus 16 system cores.
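The core counts quoted above can be checked with a quick bit of arithmetic (figures taken from the article; this is just a sanity check, not Kalray documentation):

```python
# Bostan core count as described: 16 compute clusters of 17 cores
# each, plus 16 system cores, should total the advertised 288.
compute_clusters = 16
cores_per_cluster = 17
system_cores = 16

total_cores = compute_clusters * cores_per_cluster + system_cores
print(total_cores)  # → 288
```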

Further, Bostan is a “time-critical enabled network-on-chip,” said Baissus, with a high-speed Ethernet interface (8×1 GbE to 10GbE). It’s capable of “on-the-fly encryption and decryption,” and it offers “easy connection to GPU/FPGA accelerator.”

As a result, the Bostan MPPA architecture can offer energy-efficient, DSP-type acceleration with timing predictability; multi-domain support (different clusters, for example, could run different embedded operating systems used in different parts of a car); and scalable massively parallel computing (processors inside can be “tiled together to adapt to system complexity”).
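To make the multi-domain idea concrete, here is a purely illustrative sketch of how the 16 compute clusters might be partitioned across isolated automotive domains. The domain names, cluster counts and OS assignments below are hypothetical examples, not Kalray’s actual configuration or API:

```python
# Hypothetical partitioning of Bostan's 16 compute clusters into
# isolated domains, each of which could run its own embedded OS.
# All names and numbers here are illustrative assumptions.
domains = {
    "vision_processing": {"clusters": 8, "os": "RTOS"},
    "sensor_fusion":     {"clusters": 4, "os": "RTOS"},
    "infotainment":      {"clusters": 4, "os": "Linux"},
}

# Verify every compute cluster is assigned to exactly one domain.
assigned = sum(d["clusters"] for d in domains.values())
assert assigned == 16

for name, d in domains.items():
    print(f"{name}: {d['clusters']} clusters on {d['os']}")
```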

[Kalray's Massively Parallel Processor Array Architecture]
__Figure 3:__ *Kalray’s Massively Parallel Processor Array Architecture. (Source: Kalray)*

Find out how Nvidia combines deep learning, sensor fusion and surround vision in its Drive PX car computer in the second part of this article.
