Blog: Stuck in the Starting Blocks

Article By : Alan Patterson

As the FANGs design more and more of their own custom chips for AI, they're getting ahead of traditional semiconductor designers and manufacturers. Something's got to give.

Hyperscale companies rule the electronics world. The so-called FANGs (Facebook, Amazon, Apple, Netflix, Google, plus Alibaba, Tencent, Baidu, and Microsoft) far outweigh the world’s biggest chipmakers by whatever metric you choose. During the next ten years, we can be sure that these giants will shape the industry in a way that few of us can imagine.

That’s evident in the relationship between Apple and TSMC. Apple accounts for about a fifth of the sales at the world’s largest foundry and more than half of its 7nm capacity.

As more of the FANGs design chips customized for AI and high-performance computing, the trend is likely to transform the semiconductor industry and put new pressure on its traditional players.

Two years ago, Google announced that its Tensor Processing Unit (TPU) beat Intel's Xeon processors and Nvidia's GPUs in machine-learning tests by more than an order of magnitude. Google made the TPU at TSMC using the foundry's 28-nm technology.

About a year ago, Amazon announced its Graviton chip to power customers' websites and other services. That chip was also made at TSMC, on a 16-nm process.

The Graviton runs circles around off-the-shelf chips from Intel and AMD that are designed for general applications. The Amazon chip runs very efficiently in the company's own environment, helping to cut costs. Customers with access to Graviton-powered servers have pared their expenses for some services by half.

Now, Facebook is also designing its own chips.

Companies such as Intel, AMD, and Nvidia must be alarmed that the FANGs are stepping onto their turf.

To be sure, the FANGs aren't likely to go on an acquisition spree and trigger another wave of buyouts in the semiconductor industry. The internet giants are already under antitrust scrutiny in Washington, D.C.

But they might try other under-the-radar tactics such as poaching talent from the chipmakers. They certainly can afford it.

The FANGs have been working with chip-design houses such as Global Unichip, TSMC's dedicated design-services affiliate. They could easily use their ties to the Taiwanese fabless company to recruit key people, for example.

So far, most of the AI chips designed by the FANGs are for data centers. Earlier this year, Qualcomm also announced plans to enter this business, which is forecast to be worth $17 billion by 2025.

Intel is expected to unveil its Nervana NNP-L1000 later this year, but the AI chip's performance is likely to lag behind Nvidia's latest data-center GPUs.

Recognizing images and speech and processing big data are stretching chips far beyond the demands of personal computing. AI chips help curb power consumption in data centers, which until now has been doubling annually, an unsustainable rate.

While more than 40 companies worldwide are developing AI-specific accelerators, most are designing chips for inference, not model training, where Nvidia dominates a multi-billion-dollar market.

But what happens when more companies make AI silicon for edge devices? Not much has happened yet, but the time is coming. AI on the edge will be a much larger market than AI in data centers, according to most experts.

“There are still many unanswered questions about how the compute capability necessary to run edge AI applications will actually be deployed in the real world,” says Steve Roddy, vice president of Arm’s machine learning group. In an automobile, there may be multiple distributed systems or one centralized system, he notes. In a factory, there may be wireless connections from every smart device back to a centralized CPU, or distributed computing.

Roddy points to the same issue with cities: how dense will the compute be in the 5G networks, versus how centralized? “One of the goals that Arm has is to create the systems in the middleware layers that will allow applications to move seamlessly from different compute models as those models evolve over time.”

Clearly Arm plans to be one of the key players in distributed AI. So does Google.

“Sensors that are able to do smart things, like voice interfaces or accelerometers, are going to become so low-power and so cheap that they’re going to be everywhere,” says Pete Warden, leader of Google’s TensorFlow Mobile/Embedded team.

“Computational devices powered by AI will touch our lives in almost every conceivable way,” says Byron Reese, the author of AI at the Edge: A GigaOm Research Byte. “The power, security, and speed requirements of these devices necessitate inference be performed at the edge, where the data is collected. This will enable an ever-more-common way of scaling the digital devices that will come to play a role in our lives.”

ST, Xilinx, and a number of other traditional chipmakers aim to address the market for inference at the edge, but they are still at the threshold of a business that promises to far exceed the market for AI in data centers.

The traditional players are standing at the starting line alongside a group of much larger newcomers. In this case, I'm betting the newcomers win the race.
