The road ahead: Are we ready for an AI driver? – Part 1

Article By : Junko Yoshida

In the human-machine interface in vehicles, AI will play a role in speech and gesture recognition, eye-tracking, driver monitoring and natural language interfaces, according to IHS Technology.

Product shipments of artificial intelligence (AI) systems for vehicles will grow from 7 million units in 2015 to 122 million by 2025, a forecast that, according to IHS Technology, reflects the automotive industry's growing appetite for AI.

The market research firm expects the attach rate of AI-based systems in new vehicles to increase from 8% in 2015 (the vast majority of today’s AI systems in cars are focused on speech recognition) to 109% in 2025. The rate can exceed 100% because IHS expects many cars to carry multiple AI systems of various types.

In the human-machine interface in vehicles, IHS believes AI will play a role in speech and gesture recognition, eye-tracking, driver monitoring and natural language interfaces. In the autonomous car, AI will advance machine vision systems, while it will also migrate in sensor fusion electronic control units (ECU).

In a phone interview with EE Times, Luca De Ambroggi, principal analyst for automotive semiconductors at IHS, told us, “AI is viewed as a key enabler for real autonomous vehicles. Everyone in the automotive supply chain is getting pretty bullish.”

But seriously, how ready are we now for the day when we depend on AI to drive autonomous cars?

On the AI algorithms that decipher complex traffic situations, De Ambroggi was blunt: “We’re not there yet. What we can do now is still very limited.” But the status quo is changing fast. He said that technology advancements in AI will be “in the steady state in the next 10 years,” bearing fruit for automotive systems to take advantage of.

EE Times asked the IHS analyst to break down automotive AI, including its advancements, applications inside vehicles, and the hardware available to process AI algorithms. We also discussed how the industry plans to “certify” AI in the future—just as human drivers must show their ability to drive a car.

Following are excerpts from our conversation with De Ambroggi.

When do you think AI crossed the threshold, in terms of its applicability in vehicles?

In my mind, it was earlier in 2015, when companies like Microsoft, Baidu (China’s search engine company) and Google confirmed that machines can now recognize objects more accurately than humans can [as shown in the ImageNet Large Scale Visual Recognition Challenge].

What are the latest breakthroughs?

First, machines today can learn from a much bigger database. Previously, we had to teach machines with a limited set of data, which made machine learning too lengthy a process. Second, there is now hardware to run and implement AI applications, i.e. inference systems. [So the process of inference, recognising objects and deducing what they are, can be automated and run much faster.]
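To illustrate what automated inference looks like in practice, here is a minimal sketch, assuming a pretrained torchvision classifier as a stand-in for a trained automotive vision model; the model choice and the input file name are placeholders, not anything IHS or De Ambroggi specified.

```python
# Minimal inference sketch: a model trained on a large image database
# is used only to recognise objects, with no further learning.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()  # inference mode

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("street_scene.jpg")      # hypothetical camera frame
batch = preprocess(image).unsqueeze(0)      # add a batch dimension

with torch.no_grad():                       # no gradients needed at inference time
    logits = model(batch)
    predicted_class = logits.argmax(dim=1).item()

print(predicted_class)  # index into the model's training label set
```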

What hardware is best suited to implement AI applications [for autonomous cars] today?

Today, the GPU is the best available hardware for implementing AI. Thus far, Nvidia is the only company that offers the complete software and hardware solutions necessary to test and develop AI systems.

In your opinion, will Nvidia’s Drive PX2 platform, for example, be ideal for autonomous cars?

For developing an autonomous car, yes. But no, it’s not for mass production. Not until Nvidia comes up with a new-generation platform, maybe a Drive PX 3, with power consumption below 50 watts, instead of the PX 2’s 250 watts.

Machine vision vs. AI

AI presumably does more than just distinguish a person from an animal. What else can it do?

It recognises not just one object, but multiple objects. More important, AI can give context to what it detects. It sees a pattern and it sees everything around objects. It can recognise, for example, that a person crossing the street is in fact a person looking down at his mobile phone.
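As an illustration of recognising multiple objects in a single frame, the following is a minimal sketch using an off-the-shelf torchvision detection model; it is a generic example, not any carmaker's stack, and the image path is a placeholder.

```python
# Minimal multi-object detection sketch: one frame in, every object
# found with reasonable confidence reported, not just a single label.
import torch
from torchvision import transforms
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)
from PIL import Image

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights)
model.eval()

image = Image.open("crosswalk.jpg")          # hypothetical street scene
tensor = transforms.ToTensor()(image)

with torch.no_grad():
    detections = model([tensor])[0]          # one dict of boxes/labels/scores per image

labels = weights.meta["categories"]
for label, score, box in zip(detections["labels"],
                             detections["scores"],
                             detections["boxes"]):
    if score > 0.5:
        print(f"{labels[int(label)]}: {score:.2f} at {box.tolist()}")
```

A model like this returns every pedestrian, car and cyclist in the scene in one pass, which is the starting point for the kind of contextual interpretation De Ambroggi describes.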

Is that the difference between traditional machine vision and AI?

AI is creating a polarising tension in the automotive industry, because AI can do so much more than standard machine vision can. Computer vision today depends on histogram of oriented gradients (HOG)-type algorithms for object detection. I’d say 95% to 99% of the vision processing done by Mobileye’s EyeQ chips is based on HOG.
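For reference, classical HOG-based pedestrian detection of the kind De Ambroggi describes can be sketched with OpenCV's built-in HOG descriptor and pre-trained people detector; this is a generic illustration, not Mobileye's implementation, and the image path is a placeholder.

```python
# Classical HOG pipeline: a fixed descriptor plus a pre-trained linear SVM,
# scanned over the image at multiple scales.
import cv2

image = cv2.imread("street_scene.jpg")      # hypothetical camera frame

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

boxes, scores = hog.detectMultiScale(image,
                                     winStride=(8, 8),
                                     padding=(8, 8),
                                     scale=1.05)

for (x, y, w, h), score in zip(boxes, scores):
    print(f"pedestrian candidate at x={x}, y={y}, w={w}, h={h}, score={float(score):.2f}")
```

A pipeline like this is fixed once it ships: recognising a new class of object means engineering and validating a new detector, which is the pain point the next answer describes.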

But technology suppliers, tier ones and car OEMs all want AI so that the same system can continuously learn. If you had to develop new systems, chips and software from scratch every time something new needs to be recognised, it would be painful. Instead, they want the system to learn it within a reasonable amount of time.
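One common way to "learn new things without starting from scratch" is to fine-tune a new classifier head on top of a pretrained backbone. The sketch below assumes a torchvision ResNet backbone and a hypothetical folder of newly collected examples; it illustrates the idea only, not any supplier's actual workflow.

```python
# Fine-tuning sketch: reuse a pretrained backbone, train only a new head
# on newly collected examples of objects that must now be recognised.
import torch
from torch import nn, optim
from torchvision import datasets, models, transforms

num_new_classes = 12                          # hypothetical count of new road objects

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False               # keep what was already learned
model.fc = nn.Linear(model.fc.in_features, num_new_classes)  # new classifier head

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
new_data = datasets.ImageFolder("new_objects/", transform=transform)  # hypothetical dataset
loader = torch.utils.data.DataLoader(new_data, batch_size=32, shuffle=True)

optimizer = optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for images, targets in loader:                # a single pass, for the sake of the sketch
    optimizer.zero_grad()
    loss = criterion(model(images), targets)
    loss.backward()
    optimizer.step()
```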

To be continued in Part 2.
