NVIDIA GTC Roundup

By Rick Merritt

Jensen Huang talks autonomous vehicles, cryptocurrency, DGX-2, Moore's Law, and yet more autonomous vehicles

SAN JOSE, Calif. — Nvidia’s chief executive faced questions on Uber, China, cryptocurrencies, and more at the company’s annual GTC event here. Like most CEOs, Jensen Huang was upbeat and often turned his answers to favorite topics such as the company’s work in AI.

The event drew 8,500 registrants interested in hearing about the latest in GPU computing, especially in AI and self-driving cars.

The most sensitive questions focused on a pedestrian killed by a self-driving Uber car. Nvidia joined companies including Toyota and Volvo in suspending road tests of its robocars, but its timing was awkward. The GPU company announced its suspension a week after the accident, the day of Huang’s keynote here.

Huang said that Nvidia stopped its road tests a day or two after the Uber accident. The company had five cars on the road, said a spokesman.

“What happened is tragic … we should give Uber the chance to understand what happened … everybody in the industry should just take a pause from testing … there may be new data as a result of the accident, and as good engineers we should wait to see if we can learn something. It won’t take long,” he said, responding to multiple questions in press Q&A sessions.

Meanwhile, development of self-driving cars, a market still two to three years away from revenue, must continue, said Huang, pointing to Nvidia’s new road-test simulators.

“As a result of what happened last week, the amount of investment is going to go up … the best way forward is not to discourage the work … none of our customers have slowed down development … inside our company, development is continuing at full speed,” he added.

NVIDIA and Toshiba autonomous vehicle roofs

Nvidia’s test car electronics (top) looked nerdier than the deliberately sleek industrial design of Toshiba’s test vehicles (bottom). Images: EE Times

Nvidia was silent on the roadmap for its flagship Volta GPU, but it did sketch out the next couple of steps for its Drive platform for robocars.

It expects to be in production by the end of the year with Xavier, its inference accelerator, which packs 9 billion transistors and is described as the equivalent of four prior-generation Parker chips. Four Xaviers will populate a 300-W Pegasus motherboard, also shipping late this year.

Nvidia gave the name Orin to a follow-on chip shown on its roadmap below but offered no other details.

NVIDIA Drive Roadmap

The roadmap for Nvidia’s Drive platform. Source: Nvidia.


Little to say about China and cryptocurrency

Huang largely sidestepped questions about the impact of a possible trade war between the U.S. and China. Making the case for globalization, he noted that Nvidia does about a third of its business in China and has a few thousand employees and many partners there.

“To think of a company as being from any one country doesn’t make sense … we all are interdependent on each other, so we have to find a way to live together,” he said.

Likewise, he did not directly address how much the rise of cryptocurrencies is to blame for high-end consumer products such as its Titan V being out of stock. He noted that bitcoin mining generally runs on ASICs, while Ethereum was deliberately designed to favor GPU mining. Cryptocurrency mining represents just one sector of demand, alongside gaming and high-performance computing, he said, offering no further details.

Huang spent most of his time talking about machine learning. The types and complexity of neural networks are exploding, from “eight layers and a few million parameters five years ago to hundreds of layers and billions of parameters” now.
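To put those figures in perspective, here is a minimal, purely illustrative Python sketch that tallies the parameters of fully connected layer stacks at roughly the two sizes Huang cited. The depths and layer widths are assumptions chosen only to land near “a few million” and “billions” of parameters; they are not Nvidia data.

```python
def mlp_param_count(layer_widths):
    """Weights plus biases for a stack of fully connected layers."""
    total = 0
    for n_in, n_out in zip(layer_widths, layer_widths[1:]):
        total += n_in * n_out + n_out  # weight matrix plus bias vector
    return total

# Hypothetical circa-2013 network: 8 layers, ~1,000 units wide.
small = mlp_param_count([1000] * 9)    # ~8 million parameters

# Hypothetical current network: 300 layers, ~4,000 units wide.
large = mlp_param_count([4000] * 301)  # ~4.8 billion parameters

print(f"small: {small:,}  large: {large:,}")
```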

 

NVIDIA Neural Networking types

Huang showed an example of what he called a “Cambrian explosion” in neural networking. Source: Nvidia.


A look inside Nvidia’s DGX-2 GPU server

NVIDIA DGX-2 GPU Server

On the show floor, Nvidia gave a look inside its DGX-2 (above), the star of this year’s GTC. The 10-kW system packs two boards with eight 32-GByte V100 GPUs each.

The GPUs are linked by a dozen 100-W NVSwitch chips (below). Each chip supports 18 NVLink 2.0 interfaces (bottom).

The system will cost a whopping $399,000 when it ships in the fall. Huang said that it will mainly serve a relatively small group of researchers and scientists. Data center operators tend to design their own streamlined GPU servers.
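As a rough sanity check on those specs, the short Python sketch below tallies the aggregate GPU memory and NVSwitch port count using only the figures quoted above; per-link bandwidth is not stated here, so it is left out.

```python
# Figures as reported above for the DGX-2.
boards = 2
gpus_per_board = 8
hbm_per_gpu_gb = 32        # 32-GByte V100
nvswitch_chips = 12
links_per_switch = 18      # NVLink 2.0 interfaces per NVSwitch

total_gpus = boards * gpus_per_board                     # 16 GPUs
aggregate_hbm_gb = total_gpus * hbm_per_gpu_gb           # 512 GB of GPU memory
total_switch_ports = nvswitch_chips * links_per_switch   # 216 NVLink ports

print(total_gpus, aggregate_hbm_gb, total_switch_ports)
```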

NVIDIA NVSwitch

NVIDIA NVLink


Expanding to new markets via AI

Huang announced two new platforms to drive its GPUs into new markets. Isaac (below) is a robot development platform based on Nvidia’s Jetson board.

Separately, Clara is a software platform that networks an Nvidia GPU server to legacy medical imaging equipment. Huang showed it enhancing a monochrome 2D ultrasound video of a heart into a color 3D version. The imaging work relies, in part, on Nvidia’s efforts to get containers, virtualization software, and Kubernetes running on its GPUs.

Nvidia’s effort to open-source the deep-learning accelerator inside its Xavier inference processor holds perhaps the greatest promise for expanding its traditional chip business. Its deal with Arm to support that IP could broadly expand use of Nvidia’s TensorRT compiler for inference jobs.

“IoT AI SoCs will be a new category of chips, and with Arm, we make it simple,” said Huang.

 

NVIDIA Isaac


Bullish on Moore’s law and the 7-nm node

NVIDIA powered Einride autonomous truck

Startup Einride (Stockholm) brought its Nvidia-powered robo-truck to the GTC show floor.

When it comes to the future of semiconductors, Huang likes to maintain that GPUs are making faster strides than CPUs. He was also quick to express optimism about the underlying process technology, despite the concerns about diminishing returns that Qualcomm voiced last week.

“We have a tech horizon of 10 years, and for the next 10 years, we are super-satisfied [that] we will continue to get semiconductor advances,” said Huang.

“We can put to good use more transistors in a given reticle size like no one else can because we are in the parallel computing business … Moore’s law is still working for us … we also benefit tremendously from new architectures because the [AI] algorithms are so complex … our observation is [that 7 nm is] going to be a great node.”

NVIDIA roboracer

Just for fun, Nvidia showed off its self-driving race car.

— Rick Merritt, Silicon Valley Bureau Chief, EE Times
