What Driving Policy means for autonomous cars

Article by: Junko Yoshida

Driving Policy is what teaches autonomous vehicles “human-like negotiation skills,” and it is critical for carmakers to nail this.

« Previously: Predictive vs reactive: Robo-car trends at CES 2017
 

At CES 2017, Mobileye’s co-founder, CTO and chairman Amnon Shashua discussed the “three pillars of autonomous driving”–sensing, mapping and driving policy–and how the company is addressing all three.

Shashua defined “Driving Policy” as being based on “deep network-enabled reinforcement learning algorithms.” These algorithms form the basis of a new class of machine intelligence, one capable of “mimicking true human driving capabilities,” he explained.

Calling it “the last piece of puzzle,” he explained that Driving Policy is what teaches autonomous vehicles “human-like negotiation skills.” Noting that “the society would not accept robotics to be involved in many fatalities,” Shashua explained how critical it is to nail this.

During the press briefing, he said, “Many people describe four-way stops as one of the most difficult things” for autonomous cars to master. “But I disagree.”

Four-way stops have right-of-way rules, he noted. In contrast, when autonomous cars must merge into traffic–whether at a lane merge or a roundabout–“there are no rules. It’s much harder.”

__Figure 1:__ *Art of merging into traffic at a roundabout*

The bottom line is that “planning is computing.” A car with no plan before it merges into a lane ends up creating a bottleneck. That’s why “autonomous vehicles need to sharpen their skills” to negotiate with other cars. In his view, it is not sensing that helps robo-cars in this situation. “This is all about driving policy,” he said.
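To make the idea concrete, here is a deliberately tiny sketch of what a learned merge-negotiation policy can look like–a softmax policy over three made-up actions, nudged by a REINFORCE-style update against a crude hand-written reward. Everything here (the action set, the features, the reward shaping) is an illustrative assumption of ours, not Mobileye’s algorithm, which uses deep networks trained at far greater scale.

```python
# Hypothetical toy sketch of a learned "driving policy" for one merge decision.
# Action names, features and reward are illustrative assumptions, not Mobileye's.
import numpy as np

rng = np.random.default_rng(0)
ACTIONS = ["yield", "nudge_in", "accelerate_and_merge"]

def features(obs):
    """obs = (gap_ahead_m, gap_behind_m, rel_speed_mps) in the target lane."""
    gap_ahead, gap_behind, rel_speed = obs
    return np.array([1.0, gap_ahead / 50.0, gap_behind / 50.0, rel_speed / 10.0])

def policy_probs(theta, obs):
    """Softmax policy: theta is an (n_actions x n_features) weight matrix."""
    logits = theta @ features(obs)
    logits -= logits.max()                      # numerical stability
    e = np.exp(logits)
    return e / e.sum()

def toy_reward(obs, action_idx):
    """Crude stand-in reward: merging into a large gap is good,
    forcing into a small gap is penalised, always yielding wastes time."""
    gap_ahead, gap_behind, _ = obs
    if ACTIONS[action_idx] == "yield":
        return -0.1
    safe = gap_ahead > 15.0 and gap_behind > 10.0
    return 1.0 if safe else -1.0

# REINFORCE-style loop: sample a decision, then push the log-probability of
# the chosen action up or down in proportion to the reward it earned.
theta = np.zeros((len(ACTIONS), 4))
lr = 0.1
for _ in range(2000):
    obs = (rng.uniform(5, 40), rng.uniform(5, 40), rng.uniform(-5, 5))
    probs = policy_probs(theta, obs)
    a = rng.choice(len(ACTIONS), p=probs)
    r = toy_reward(obs, a)
    # gradient of log softmax: (one_hot(a) - probs) outer features(obs)
    grad_log = (np.eye(len(ACTIONS))[a] - probs)[:, None] * features(obs)[None, :]
    theta += lr * r * grad_log

print(ACTIONS[int(np.argmax(policy_probs(theta, (30.0, 25.0, 0.0))))])  # typically a merge action
print(ACTIONS[int(np.argmax(policy_probs(theta, (6.0, 6.0, 0.0))))])    # typically "yield"
```

The point of the toy is the shape of the loop: the policy is rewarded or penalised for the negotiation outcome rather than being told the “correct” action, which is exactly what separates the merge problem from a rule-based four-way stop.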

Driving policy is 'behaviour'

Phil Magney, founder and principal advisor for Vision Systems Intelligence (VSI), agreed that driving policy is behaviour and that this is a hard problem–one Mobileye will have to solve by working with its partners (Intel, Delphi, etc.). “Mobileye does not have the hardware to support driving policy and this is why they are joining up with Intel to develop driving policy for BMW using Intel SoC,” said Magney.

Roger Lanctot, associate director in the global automotive practice at Strategy Analytics, told us, “Automakers need to build ‘driving policy’ into software code.”

This poses a challenge for insurance companies like Swiss Re (which was at the CES as one of NXP’s partners), said Lanctot, because they will have to figure out “how to underwrite silicon software” in cars to assess their safety.

Dual track: HOG & CNN

Evident at this year’s CES was technology suppliers' scramble to perfect traditional computer vision while grappling with advancements being made in neural networks.

As VSI’s Magney told us, several OEMs and Tier-one suppliers “are embracing AI for legitimate and practical applications, [but] only once served by deterministic algorithms [like HOG] where one size fits all.”

Liran Bar, director of product marketing in CEVA’s vision business unit, explained to EE Times, “Most SoC vendors we know are on a dual track.” They are keeping two teams–one assigned to computer vision and another tasked to advance deep learning–and pitting them against each other, he explained. During the CES, CEVA, a supplier of DSP IP cores and SoC platforms, announced that ON Semiconductor has licensed CEVA’s imaging and vision platform for its ADAS product lines.

__Figure 2:__ *CEVA’s deep neural network toolkit (Source: CEVA)*

SoC designers want to keep their options open, Bar noted, because the CNN they know today could be vastly improved by the time carmakers actually roll out highly automated vehicles featuring a new SoC.

Chip designers need to build in “flexibility,” Bar explained. More importantly, though, they are looking for an easier way to convert software originally written for floating-point architectures into software that can run on fixed-point, because CNN execution on a low-power SoC demands a fixed-point architecture. At CEVA, “We are offering a framework for that conversion, saving their time to do porting,” explained Bar.
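For readers unfamiliar with the conversion Bar describes, the minimal sketch below shows the core idea of generic post-training quantization–mapping float weights and activations into Q15 fixed-point integers and accumulating in a wider type. It is our own illustration under simplifying assumptions (values pre-scaled to roughly [-1, 1)), not CEVA’s toolchain, which automates this step across whole networks.

```python
# Generic post-training quantization sketch: float -> Q15 fixed point.
# Illustrative only; values are assumed to be pre-scaled to roughly [-1, 1).
import numpy as np

def quantize_q15(x):
    """Map float values in roughly [-1, 1) to Q15 fixed point (int16)."""
    scale = 2 ** 15
    q = np.clip(np.round(x * scale), -scale, scale - 1).astype(np.int16)
    return q, scale

def fixed_point_dot(wq, xq, scale):
    """Dot product with a wide accumulator, then rescale back to float."""
    acc = np.sum(wq.astype(np.int64) * xq.astype(np.int64))
    return acc / (scale * scale)

rng = np.random.default_rng(1)
w = rng.uniform(-1, 1, size=256)   # e.g. one row of a layer's weight matrix
x = rng.uniform(-1, 1, size=256)   # activations

wq, s = quantize_q15(w)
xq, _ = quantize_q15(x)

print("float result :", float(w @ x))
print("fixed result :", fixed_point_dot(wq, xq, s))  # close, small quantization error
```

The trade-off is the usual one: fixed-point arithmetic is far cheaper on a low-power SoC, at the cost of a small, bounded quantization error that the conversion framework has to keep within accuracy targets.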

Watchdog for autonomous vehicles

So, if the safety functionality of AI-driven autonomous vehicles can’t be properly tested because of the non-deterministic nature of CNNs, who is going to check whether robo-cars are actually driving themselves safely?

French research institute Leti came to the CES this year to show off its low-power sensor fusion solution, called “Sigma Fusion.”

Julien Mottin, research engineer at Leti’s embedded software and tools laboratory, told us Sigma Fusion was designed to monitor safe autonomous driving.

__Figure 3:__ *Leti shows off Sigma Fusion demo*

Mottin stressed, “We believe in AI. We think AI is mandatory for highly automated driving.” But the inability to test the safety of AI-driven cars–in compliance with ISO 26262–troubles designers. He explained that Leti’s team set out to work on the project with a clear goal in mind: “How to bring trust to computing in autonomous cars?”

Mottin envisions the Sigma Fusion chip being embedded on an already certified ASIL-D automotive platform, serving as a watchdog. Or it could be integrated into the car’s black box.

Isolation from the rest of the automotive module makes it possible for Sigma Fusion to independently monitor what’s going on in the car. It “can’t explain why certain errors occurred in automated driving, but it can detect what has gone wrong—for example, an error happening in the decision path in a car,” he explained.

Sigma Fusion, compatible with any kind of sensor, can receive raw data directly from state-of-the-art sensors, the Leti researchers said. The version demonstrated at its booth gets data from image sensors and lidars, and fuses the data on an off-the-shelf microcontroller–in this case an STMicroelectronics ARM Cortex-M7-based MCU. The sensor fusion operation consumes less than 1 watt, making it 100 times more efficient than comparable systems, they added.

Leti plans to continue developing Sigma Fusion by adding sensor technologies–including lidar, radar, vision, ultrasound and time-of-flight cameras–into the system.

In essence, Sigma Fusion is designed to offer a “safe assessment” of the free space surrounding the vehicle, and fast, accurate environmental perception in real time, on a mass-market MCU, according to Leti researcher Diego Puschini. The end game is to provide “predictable behaviour and proven reliability to meet the automotive certification process.”
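As a rough illustration of what such a free-space “watchdog” check involves–our own sketch under simplified assumptions, not Leti’s published algorithm–the code below fuses range returns from two sensors into a coarse polar grid, keeping the most pessimistic reading per sector, and flags any planned motion that would intrude on occupied space.

```python
# Minimal free-space "watchdog" sketch (illustrative, not Leti's algorithm):
# fuse range returns from several sensors into a coarse polar grid and check
# a planned motion against the fused free space.
import math

N_SECTORS = 36          # 10-degree sectors around the vehicle
MAX_RANGE_M = 30.0

def free_space(ranges_by_sensor):
    """Conservative fusion: per sector, keep the SHORTEST range any sensor saw."""
    grid = [MAX_RANGE_M] * N_SECTORS
    for ranges in ranges_by_sensor:          # one list of (angle_rad, dist_m) per sensor
        for angle, dist in ranges:
            sector = int((angle % (2 * math.pi)) / (2 * math.pi) * N_SECTORS)
            grid[sector] = min(grid[sector], dist)
    return grid

def motion_is_safe(grid, heading_rad, travel_m, margin_m=1.0):
    """Check the planned motion against fused free space in that direction."""
    sector = int((heading_rad % (2 * math.pi)) / (2 * math.pi) * N_SECTORS)
    return travel_m + margin_m <= grid[sector]

# Hypothetical returns: the lidar sees an obstacle 8 m ahead, the camera-derived
# depth sees it at 9 m; the fused grid keeps the more pessimistic 8 m.
lidar  = [(0.0, 8.0), (math.pi / 2, 20.0)]
camera = [(0.0, 9.0), (math.pi, 15.0)]
grid = free_space([lidar, camera])

print(motion_is_safe(grid, heading_rad=0.0, travel_m=5.0))   # True: 6 m needed, 8 m free
print(motion_is_safe(grid, heading_rad=0.0, travel_m=7.5))   # False: would breach the margin
```

Keeping the check this simple and deterministic is what makes it plausible on a mass-market MCU, and amenable to the kind of certification argument Puschini describes.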

 

This article first appeared on EE Times U.S.
