Machine learning may be Intel’s strategic inflection point

By Rick Merritt

Intel dominates the high-margin market for server processors, but machine learning demands more performance on highly parallel tasks.

Intel is demonstrating two new versions of its Xeon processor and a new FPGA card for deep learning. The demos, however, are just the tip of the iceberg for a soup-to-nuts machine learning offering the company will announce at an event on November 17.

The industry has been waiting to hear Intel’s plan for machine learning, one of the hottest areas to emerge in semiconductors in recent years. “I think [Intel CEO] Brian Krzanich will bet the company,” said Nigel Toon, chief executive and co-founder of Graphcore, a start-up with its own AI processor that recently snagged $30 million from investors, including Intel archrival Samsung.

Intel’s been making a few investments of its own. In recent weeks, the PC giant acquired two hot neural networking processor start-ups: Nervana and Movidius. They add to Intel’s $16.7 billion acquisition of Altera, whose FPGAs are already being used to accelerate search, networking and other jobs in the data centers of Baidu and Microsoft.

Meanwhile, Intel continues to position Xeon Phi, a massively multicore x86 processor, as its key weapon against graphics processors from Nvidia and AMD. At IDF in August, it said the Knights Mill version of Phi, the first to act as both host and accelerator, will ship in 2017.

Machine learning presents what former Intel CEO Andy Grove might have called a strategic inflection point. Intel dominates the high-margin market for server processors, but machine learning demands more performance on highly parallel tasks than those chips offer.

__Figure 1:__ *Intel debuts new Arria 10 PCIe card for deep learning. (Source: Intel)*

Google is already using its own ASIC to accelerate the kind of machine learning inference tasks that Intel targets with a new PCI Express card using an Altera Arria FPGA. Facebook designed its own GPU server using Nvidia chips for the computationally intensive job of training neural networks.

Meanwhile, Nvidia launched its own GPU server earlier this year, and IBM and Nvidia collaborated on another one using Power processors. For its part, AMD rolled out an open software initiative for its GPUs earlier this year.

All are vying for sockets in a deep learning hardware market forecast to grow from $436 million in 2015 to $41.5 billion by 2024, according to market watcher Tractica. The potential is fueling a rise in investments in semiconductor start-ups such as Graphcore, Wave Computing and Cornami, in addition to the two Intel bought this year.
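For a sense of scale, Tractica’s figures imply a compound annual growth rate of roughly 66 percent over that nine-year span:

$$
\mathrm{CAGR} = \left(\frac{\$41.5\,\text{billion}}{\$436\,\text{million}}\right)^{1/9} - 1 \approx 0.66
$$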

“Nervana was bought for the software—I don’t think the hardware was strong enough to push forward,” said Toon of Graphcore, although one Intel manager suggested Intel will aim the Nervana hardware at the enterprise market for private clouds.

“Movidius is well aligned to [Intel’s] RealSense [3D camera] line for low power systems at the edge of the network and the Internet of Things,” Toon said.

“I’m not sure how Altera fits, to be honest,” said Toon, who joined the FPGA maker in 1988, helping it launch and run its business in Europe.

“Intel may try to use FPGAs as machine learning accelerators in some way, but having been involved in FPGAs, I don’t think that’s the right approach,” he said, noting that Graphcore’s multicore chips will be more powerful than either FPGAs or GPUs for deep learning.

 