NXP Introduces Auto-Grade AI Toolkit for AVs

Article By : Junko Yoshida

NXP Semiconductors this week rolled out a new deep learning toolkit called eIQ Auto. The company is seeking to set itself apart from competitors by making its tools “automotive-quality,” with the goal of making it easier for AV designers to implement deep learning in vehicles.

The development of autonomous vehicles (AVs) does not necessarily require either artificial intelligence or deep learning. Simply put, not all AVs need to be AI-driven. And yet deep learning’s rapid advances and improving accuracy are alluring to developers looking to enhance their highly automated vehicles.

The difficulty of validating the safety of AI-driven AVs persists, however. Safety researchers worry about the “black-box” nature of deep learning, and that’s only one of several thorny issues. It remains uncertain whether AV designers can verify and validate a continuously learning AI system, or whether an AI feature, once deployed on dedicated hardware inside a vehicle, will behave the same as it did when developed and trained on a larger, more powerful computer system.

Despite such concerns, both AV and safety communities recognize that AI is not a topic they can avoid discussing.


With the release of draft specifications for UL 4600 last week, Phil Koopman, CTO of Edge Case Research, told us, “We will go after full autonomy head-on.”

UL 4600, a safety standard for evaluating autonomous products currently under development at Underwriters Laboratories, neither assumes nor mandates that deep learning be deployed inside AVs. But the standard does cover the validation of any machine-learning and other autonomy functions used in life-critical applications.

Automotive-grade software toolkit for deep learning
Against this backdrop, NXP Semiconductors introduced its eIQ Auto deep learning toolkit.

“Most deep-learning frameworks and neural nets developed thus far are used for consumer applications such as vision, speech and natural language,” observed Ali Osman Ors, director of Automotive AI Strategy and Partnerships at NXP Semiconductors. They were not necessarily developed with life-critical applications in mind.

(Source: NXP)

NXP, a leading automotive chip supplier, is going a step further by making its software toolkit compliant with Automotive SPICE (Automotive Software Process Improvement and Capability dEtermination, or A-SPICE), a set of guidelines developed by German automakers to improve software development processes.

NXP explained that its eIQ Auto toolkit — designed specifically for NXP’s S32V234 processor — will help AV developers “optimize the embedded hardware development of deep learning algorithms and accelerate the time to market.”

Asked whether similar auto-grade toolkits for deep learning are available elsewhere, Ors said, “Some car OEMs might have designed their own tools in-house. But as far as I know, I haven’t seen other automotive chip vendors offering automotive-quality software toolkits like ours for deep learning.”

Pruning, quantization and compression
The process of data preparation and training (learning) and the process of AI deployment (inference) on embedded systems are well understood.

Today, AV developers are said to be collecting data at 4 gigabytes per second as their test vehicles drive on public roads, or roughly 14 terabytes for every hour of driving. Cleaning up and annotating such a huge volume of data and prepping it for training can be very costly. In some cases, the data-labeling process alone can financially cripple algorithm developers and AV startups.

But equally challenging for AV designers, though little discussed publicly, are the arduous tasks of optimizing a trained AI model and converting it for deployment on inference engines. Ors explained that NXP’s tool accelerates the process of “quantization, pruning and compressing” the neural network.

By pruning, Ors means removing redundant connections from the neural network architecture by cutting out unimportant weights. A newly “pruned” model will, of course, lose some accuracy, so it has to be fine-tuned after pruning to restore that accuracy.
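NXP hasn’t published how eIQ Auto implements its pruning pass, but the general technique is easy to illustrate. Below is a minimal sketch of magnitude-based pruning using PyTorch’s built-in pruning utilities; it stands in for whatever proprietary method the toolkit actually uses.

```python
# A minimal sketch of magnitude-based pruning (illustrative only,
# not eIQ Auto's proprietary implementation).
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(128, 64)  # stand-in for one layer of a trained network

# Zero out the 30% of weights with the smallest absolute values
# (the "unimportant" connections Ors refers to).
prune.l1_unstructured(layer, name="weight", amount=0.3)
prune.remove(layer, "weight")  # fold the pruning mask into the weights

sparsity = (layer.weight == 0).float().mean().item()
print(f"weights removed: {sparsity:.0%}")  # ~30%

# In practice, the pruned network is then fine-tuned (briefly retrained)
# to recover the accuracy lost by cutting those connections.
```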

Next up, quantization creates an “efficient computing process,” said Ors. It involves bundling weights, by clustering or rounding them, so that the same number of connections can be represented using less memory. Another common technique is converting floating-point weights to a fixed-point representation by rounding. As with pruning, the model must be fine-tuned after quantization.
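As a concrete illustration of the float-to-fixed-point rounding Ors describes, here is a self-contained sketch of symmetric 8-bit quantization. Production toolchains, eIQ Auto presumably included, use more sophisticated schemes (per-channel scales, calibration data), but the principle is the same.

```python
# Symmetric int8 quantization of a weight tensor: four times smaller
# storage in exchange for small rounding errors.
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Map float weights onto the int8 range [-127, 127]."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights for accuracy checks."""
    return q.astype(np.float32) * scale

w = np.random.randn(64, 128).astype(np.float32)  # stand-in weights
q, scale = quantize_int8(w)
err = np.abs(w - dequantize(q, scale)).max()
print(f"max rounding error: {err:.4f}")
```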

AV designers evaluate the accuracy of the converted model by running test data (which the deep learning system hasn’t seen before) and further fine-tune the model.
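The accuracy check itself is straightforward to sketch. In the hypothetical snippet below, `float_model`, `quantized_model` and `test_loader` are placeholders rather than eIQ Auto APIs: the same held-out test set is scored before and after conversion, and a large gap triggers another round of fine-tuning.

```python
# Hypothetical sketch of the post-conversion accuracy check.
# `test_loader` yields (images, labels) batches the model has never seen.
import torch

def accuracy(model, test_loader):
    """Fraction of test samples the model classifies correctly."""
    correct, total = 0, 0
    model.eval()
    with torch.no_grad():
        for images, labels in test_loader:
            preds = model(images).argmax(dim=1)
            correct += (preds == labels).sum().item()
            total += labels.numel()
    return correct / total

# drop = accuracy(float_model, test_loader) - accuracy(quantized_model, test_loader)
# If `drop` exceeds the application's tolerance, fine-tune and re-test.
```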

(Source: NXP)

Partitioning the workload
Beyond that, eIQ Auto, said NXP, “partitions the workload and selects the optimum compute engine for each part of the neural network.” It speeds the otherwise hand-crafted process of building the inference engine, because the tool can help AV designers figure out which tasks run best on the CPU, DSP or GPU, Ors explained. He said that eIQ Auto can’t be used with non-NXP devices, as the tool must be intimately familiar with what’s happening inside the processor.
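NXP hasn’t disclosed the cost model behind this partitioning, but the idea can be shown with a toy example. In the sketch below, the per-engine costs are invented purely for illustration; a real partitioner would also weigh the cost of moving intermediate tensors between engines, which can dominate on embedded parts.

```python
# Toy workload partitioner: assign each layer of a network to the
# compute engine with the lowest modeled cost. All costs are made up.
LAYER_COST = {  # hypothetical relative cost of each op on each engine
    "conv":    {"cpu": 9.0, "gpu": 1.0, "dsp": 2.0},
    "pool":    {"cpu": 2.0, "gpu": 1.5, "dsp": 1.0},
    "softmax": {"cpu": 1.0, "gpu": 3.0, "dsp": 2.5},
}

def partition(layers):
    """Pick the cheapest engine for each layer in the list."""
    return [(op, min(LAYER_COST[op], key=LAYER_COST[op].get))
            for op in layers]

print(partition(["conv", "conv", "pool", "softmax"]))
# [('conv', 'gpu'), ('conv', 'gpu'), ('pool', 'dsp'), ('softmax', 'cpu')]
```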

eIQ Auto also comes with interfaces to training frameworks and model formats such as TensorFlow, ONNX, Caffe and PyTorch, along with model optimization and usage tools (scripts, compiler toolchains) and runtime libraries (C/C++, vector DSP, NEON).
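ONNX in particular serves as a common interchange point: a model trained in one framework can be exported and handed to an embedded toolchain. The sketch below uses PyTorch’s standard torch.onnx.export call; the tiny model is a stand-in, and how eIQ Auto consumes the resulting file is not something NXP has detailed here.

```python
# Exporting a trained PyTorch model to ONNX, one of the model formats
# listed above. The model is a stand-in; the export call is standard.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3),  # 3-channel image in, 8 feature maps out
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 30 * 30, 10),      # 32x32 input shrinks to 30x30 after conv
)
model.eval()

dummy = torch.randn(1, 3, 32, 32)    # example input that fixes tensor shapes
torch.onnx.export(model, dummy, "model.onnx",
                  input_names=["image"], output_names=["scores"])
```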

In sum, the toolkit’s aim is to help customers move quickly from a development environment to AI implementations that meet stringent automotive standards.

Where AI is applied inside AVs
Today, the most prevalent AI application inside a vehicle is “vision”: using neural networks to classify objects in images. Vision is also used for driver and cabin monitoring, face identification and occupancy detection.
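For a sense of what that looks like in code, here is a bare-bones classification pass using an off-the-shelf pretrained network. Production AV perception relies on purpose-built, safety-qualified models rather than a stock ResNet, so treat this purely as an illustration.

```python
# Bare-bones image classification, illustrating the "vision" use case.
# A stock pretrained ResNet stands in for a purpose-built AV network.
import torch
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

frame = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed camera frame
with torch.no_grad():
    scores = model(frame).softmax(dim=1)
print(scores.argmax(dim=1).item())   # index of the most likely object class
```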

(Source: NXP)

Other potential AI applications in automotive include radar. Future radars are expected to use neural networks to classify road users based on radar imagery. Ors noted, however, that the use of AI in radar applications remains undeveloped. “There is a higher barrier to entry,” said Ors, “due to the regulations related to the use of radar as a sensor.” Compared with CMOS image sensors, radars are also expensive, he added, and because radar data can’t be easily obtained, the datasets available for training are limited.

AI is also expected to be applied to data fusion — vision with radar, for example. But again, the industry hasn’t reached consensus on when to fuse two species of sensory data. “Early fusion vs. late fusion is still being debated,” said Ors.

Today, most test AVs come with power-hungry hardware, which is not ideal for volume automotive production. NXP hopes its new eIQ Auto toolkit will enable customers to deploy powerful neural nets “in an embedded processor environment with the highest levels of safety and reliability.”

 
