Market for Cloud AI Chipsets to Double in Next Five Years

Article By : Sally Ward-Foxton

The opportunity for AI accelerator chips is much-hyped, but how big is the market, and which companies are actually selling chips today?

Two new reports from ABI Research detail the state of play for today’s AI chipset market. EETimes spoke to the reports’ author, Principal Analyst Lian Jye Su, to gain some insight into which companies and technologies are making inroads into this potentially lucrative market.

AI in the Cloud 
The first report, “Cloud AI Chipsets: Market Landscape and Vendor Positioning,” highlights how cloud AI inference and training services are growing rapidly. The resulting AI chipset market is expected to grow from US$4.2 billion in 2019 to US$10 billion in 2024. Nvidia and Intel, the current leaders in this space, are being challenged by companies including Cambricon Technologies, Graphcore, Habana Labs and Qualcomm. 
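
As a quick check on the headline claim, the growth rate implied by those two figures can be worked out directly. The short Python sketch below is purely illustrative and uses only the numbers cited above (US$4.2 billion in 2019, US$10 billion in 2024); it is not taken from the report itself.

```python
# Illustrative only: the CAGR implied by the ABI Research figures cited above.
start_value = 4.2    # cloud AI chipset market in 2019, US$ billions
end_value = 10.0     # forecast for 2024, US$ billions
years = 2024 - 2019  # five-year horizon

# CAGR = (end / start) ** (1 / years) - 1
cagr = (end_value / start_value) ** (1 / years) - 1
multiple = end_value / start_value

print(f"Implied CAGR: {cagr:.1%}")          # ~18.9% per year
print(f"Growth multiple: {multiple:.2f}x")  # ~2.38x, i.e. more than double
```

That multiple of roughly 2.4x is what the headline rounds to "double" over five years.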

According to Su, Nvidia is still a clear leader in this market, largely down to its well-established developer ecosystem and its first-mover advantage.

“Also, as AI models, libraries and toolkits constantly change and update, Nvidia serves as a good fallback option, given its capability as a general-purpose AI chipset,” Su said. “Granted, these advantages will slowly diminish as the market matures, but Nvidia will still be in a strong position for at least the foreseeable future.”

Lian Jye Su (Source: ABI Research)

Today’s cloud market for AI chipsets is broken into three segments. The public cloud is hosted by cloud service providers such as AWS, Microsoft, Google, Alibaba, Baidu and Tencent. Then there are enterprise data centres, which are effectively private clouds, plus what ABI calls “hybrid cloud”: offerings that combine public and private clouds (VMware, Rackspace, NetApp, HPE, Dell).

The report also identified an additional, emerging segment: telco clouds, referring to cloud infrastructure deployed by telecom companies for their core network, IT and edge computing workloads.

This new segment represents a big opportunity for AI chipset makers, Su said.

“We are already seeing network infrastructure vendors like Huawei, and to a lesser extent Nokia, rolling out ASICs that are optimized for telco network functions,” Su said. “It is a huge market, but Nvidia has been pushing very hard recently to get into this market.”

Total annual revenue from AI chipset sales, 2017 to 2024 (Source: ABI Research)

While Su doesn’t see any other company unseating Nvidia from its dominant position in cloud AI training any time soon, inference is more of a free-for-all, with no single player dominating at present. This is partly down to the nature of inference workloads, which differ between verticals. ASICs are expected to see strong growth in this segment from 2020 onwards, he said.

The current trend of moving AI inference to edge devices will mean less reliance on the cloud from devices such as smartphones, autonomous vehicles and robots. But this does not mean that the inference workload, which some cloud service providers deem to be larger than the training workload, is going to diminish, Su said.

“Some AI will never move to the edge, such as chatbots and conversational AI, fraud monitoring and cybersecurity systems,” he said. “These systems will evolve from rule-based to deep learning-based AI systems, which actually increases the inference workload. [The increase] will be more than sufficient to replace those inference workloads that move to the edge.”

And then, there is Google. Google’s TPU (tensor processing unit), which can address both training and inference in the cloud, is seen as a strong challenger for CPU and GPU technologies (led by Intel and Nvidia, respectively). As the report notes, Google’s success with the TPU has provided a blueprint for other cloud service providers (CSPs) to develop their own AI accelerator ASICs. Huawei, AWS and Baidu have already done so.

If the cloud service providers are all working on their own chipsets, does that leave a market for any other chipset suppliers in that sector?

“You are certainly right that this path has been very challenging for newcomers as CSPs start to work on their own chipsets,” Su said. “We even forecast that 15% to 18% of the market will fall under the CSPs by 2024. The opportunity lies more in the private data centre space. Banking institutions, medical organizations, R&D labs and academia will still need to run AI and they will be considering chipsets that are more optimized for AI workloads, which gives newcomers like Cerebras, Graphcore, Habana Labs and Wave Computing some advantages.”

Other players set to benefit from these trends are IP core licensing vendors, like ARM, Cadence and VeriSilicon, which will handle chipset design for the growing number of enterprises developing their own chipsets, Su said.

AI at the Edge
ABI’s second report, “Edge AI Chipsets: Technology Outlook and Use Cases,” puts the edge AI inference chipset market at US$1.9 billion in 2018. The report also identified, perhaps surprisingly, a market for training at the edge, which it placed at US$1.4 million for the same year.

Which applications are doing training at the edge today? Su explained that this figure covers gateways (historians or device hubs) and on-premise servers (in private clouds, but geographically located where the AI data is generated). Products designed for these training tasks include Nvidia’s DGX systems, Huawei’s gateways and servers featuring its Ascend 910 chipset, and system-level offerings targeted at on-premise data centres from the likes of Cerebras Systems, Graphcore and Habana Labs.

“This [training at the edge] market will remain small, as cloud is still the preferred location for AI training,” Su said.

Total annual revenue from AI chipset sales, by inference and training, 2017 to 2024 (Source: ABI Research)

AI inference at the edge, meanwhile, is responsible for the bulk of edge AI’s estimated 31% CAGR between 2019 and 2024; a short sketch of what that growth rate implies follows the three niches below. For edge inference, Su described three main markets (smartphones/wearables, automotive, smart home/white goods) plus three niches.

The first niche, robotics, typically requires a heterogeneous compute architecture, as robots rely on many types of AI workloads, such as SLAM (simultaneous localization and mapping) for navigation, conversational AI for the human-machine interface and machine vision for object detection, all of which use CPUs, GPUs and ASICs to varying degrees. Nvidia, Intel and Qualcomm compete heavily in this space, he said.

“The second niche is smart industrial applications, including manufacturing, smart building, and the oil and gas sector,” he said. “We are seeing FPGA vendors excelling in this space due to legacy equipment, but also due to FPGAs’ architecture, [which provides] flexibility and adaptability.”

Finally, there is the “very edge,” the trend towards embedding ultra-low power AI chipsets into sensors and other small end nodes on wide area networks. Given the focus on ultra-low power consumption, this space is populated by FPGA companies, RISC-V designs and ASIC vendors.  
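
On the growth figure cited earlier: compounding at 31% a year for five years multiplies a market nearly fourfold. A minimal, purely illustrative Python sketch follows; the 2019 base value is a normalized placeholder, not an ABI Research figure.

```python
# Illustrative only: what compounding at a 31% CAGR implies over 2019-2024.
# The 2019 base value is a normalized placeholder, not an ABI Research figure.
cagr = 0.31
base_2019 = 1.0  # normalized 2019 market size

for year in range(2019, 2025):
    value = base_2019 * (1 + cagr) ** (year - 2019)
    print(f"{year}: {value:.2f}x the 2019 market size")

# By 2024 the market reaches (1.31 ** 5) ~= 3.86x its 2019 size.
```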

Who is actually getting design-ins for AI inference at the edge so far?

“Surprisingly, or not surprisingly, due to the large shipment volume of smartphones, smartphone AI ASIC vendors are actually in the lead for the edge AI chipset market,” Su said. “This refers to Apple, HiSilicon, Qualcomm, Samsung and to a lesser extent MediaTek. However, if we talk strictly about startups, I think Hailo, Horizon Robotics and Rockchip seem to be gaining some momentum with end device manufacturers.”

Su also said that software will be critical for commercial implementation and deployment of edge AI chipsets, comparing Nvidia’s ongoing efforts to upgrade its compiler tools and build a developer community to the approach taken by Intel and Xilinx, which was to collaborate with or acquire startups that have software-based acceleration solutions.

“Chipset companies should consider offering toolkits and libraries to the developer community, as well as developer education programs, contests, forums, and conferences, as these will entice developers to work with the chipset companies and develop relevant applications. All these are not easily achievable by new startups,” he said.

The report concludes that in addition to the right software and support for the developer community, successful companies in this space will also provide a good development roadmap, supported by the rest of the technology value chain. They will also need to generate scale for their chips across various use cases, while maintaining competitive price points.
