AI Takes Centre Stage at Hot Chips

By Rick Merritt

Startup Cerebras and x86 giant Intel will disclose rival AI accelerators at the annual CPU event, now rich with talks on machine learning

In a sign of the times, half of the talks at this year’s Hot Chips are focused on AI acceleration. The annual gathering for microprocessor designers once focused most of its talks on CPUs for PCs and servers.

Startups Cerebras, Habana, and UpMem will unveil new deep-learning processors. Cerebras will describe a much-anticipated device using wafer-scale integration. Habana, already shipping an inference chip, will show its follow-on for training.

Grenoble-based UpMem will disclose a new processor-in-memory, believed to use DRAM, aimed at multiple uses. Graphcore was invited but was not ready to share more details of its chips.

The startups will compete with giants such as Intel, which will describe Spring Hill and Spring Crest, its inference and training chips based on its Nervana architecture. In a rare appearance, Alibaba will disclose an inference processor for embedded systems.

In addition, Huawei, MIPS, Nvidia, and Xilinx will provide new details on their existing deep-learning chips. Members of the MLPerf group are expected to describe their inference benchmark for data center and embedded systems, a follow-on to their training benchmark.

Organizers hope that a senior engineer from Huawei will be able to give a talk about its Ascend 310/910 AI chips. However, given that the company is in the crosshairs of the U.S./China trade war, it’s unclear whether the speaker will be able to get a visa or will be confronted with other obstacles.

Nvidia dominates the market for AI training chips with its V100. Given its market lead, it chose not to launch new silicon this year. So it will describe a research effort on a multi-chip module for inference tasks that it says delivers 0.11 picojoules/operation across a range of 0.32–128 tera-operations/second.
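For scale, a back-of-the-envelope calculation (our arithmetic, not a figure Nvidia disclosed, and assuming the 0.11-pJ/op efficiency holds across the whole range) puts the implied power draw at the two ends of that throughput range at:

$$0.11~\text{pJ/op} \times 0.32 \times 10^{12}~\text{op/s} \approx 35~\text{mW}, \qquad 0.11~\text{pJ/op} \times 128 \times 10^{12}~\text{op/s} \approx 14~\text{W}$$

That spread, from tens of milliwatts to double-digit watts, suggests the modular design is pitched at everything from embedded inference to data-center-class throughput.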

As an extra treat, the three top cloud-computing providers in the U.S. will host tutorials on their AI hardware. It’s rare for any of the trio to speak on the topic at events they do not host, let alone join one where their competitors are speaking.

Google will describe details of its liquid-cooled, third-generation TPU. A representative of Microsoft’s Azure will discuss its next-generation FPGA hardware. And a member of Amazon’s AWS will cover its I/O and system acceleration hardware.

In addition, a Facebook engineer will describe Zion, its multiprocessor system for training announced at the Open Compute Summit earlier this year. “Facebook and its Open Compute partners are more and more setting the standards for form factors and interconnect approaches for data center servers,” said a Hot Chips organizer.

AMD, IBM, and Intel still slugging it out

“If Rip Van Winkle fell asleep in 1999 and woke up now, he would be astounded by all the attention to machine learning and AI, which were pretty much just research topics when he started his nap,” quipped veteran microprocessor analyst Nathan Brookwood of Insight64.

But, he added, Rip would “be pretty comfortable with about half of the papers on this year’s Hot Chips agenda because they are fairly straightforward extrapolations of past conferences. Intel, AMD, and IBM are still slugging it out to get more performance out of architectures [that] Rip already knew.”

Indeed, PCs and servers still get significant attention at the event. AMD will discuss Zen 2, its next-generation x86 core for both client and server systems. IBM will present a next-generation server processor, believed to be the Power 10.

AMD’s chief executive, Lisa Su, will give one of two keynotes. The head of TSMC’s corporate research group will give the other, providing insights into future process nodes.

A miscellany of other interesting talks rounds out the program. Tesla will provide more details on the self-driving-car silicon it recently disclosed. In separate talks, Intel will dive deeper into its Optane memories and its emerging packaging technologies.

For its part, Hewlett Packard Enterprise will describe the first chipset for Gen-Z, an open interface for distributed memory and storage that is agnostic to the many emerging memory architectures. Separately, Ayar Labs will describe its TeraPHY high-speed interconnect.

In another new wrinkle, AMD and Nvidia will close the conference with discussions of their latest GPUs geared for high-performance computing. It’s a fairly new field for the chips once focused solely on gaming, Brookwood said. And ironically, it’s the slot that Hot Chips used to reserve for server CPUs.
