MWC: Mobile AI Heating Up

Article by Junko Yoshida, EE Times

MediaTek equips SoCs with neural engines, targets "new premium" segment of devices

BARCELONA — While Apple and Samsung, both armed with home-grown apps processors, have a lock on the premium smartphone market, MediaTek, seeking to rebound in smartphones, is rolling out its Helio P60 chipset at the Mobile World Congress here.

MediaTek’s plan is to re-enter the mid-upper tier smartphone market where it competes with Qualcomm.

MediaTek is pitching Helio P60 as “the first SoC platform featuring a multi-core AI processing unit (mobile APU) and MediaTek’s NeuroPilot AI technology.”

MediaTek’s move highlights a sharp shift in focus, in the industry’s smartphone battle, to mobile AI. Various chip vendors are racing to make neural network engines locally available on handsets. The goal is simple: to enable AI experiences such as voice UIs, face unlock and AR to be processed on client devices, faster and better, with or without a network connection.

“We only in the last year have seen the first wave of smartphone processors with embedded neural engines, and those were all in flagship processors like the Apple A11, Huawei’s Kirin 970, Qualcomm’s Snapdragon 835 and MediaTek’s Helio X30,” said Mike Demler, senior analyst at the Linley Group.

Demler said, “We’re not surprised that MediaTek would add a neural engine in the lower tier, but it’s interesting that they’re doing it with a more powerful core than the company’s flagship X30 has.”

In other words, a vibrant community of mid-tier smartphone vendors, mostly Chinese handset manufacturers, appears impatient. These vendors want to pounce on the mobile AI trend as soon as possible.

New premium
MediaTek has defined what it calls “new premium” as “devices that offer premium performance and features at a mid-range price.” Finbarr Moynihan, general manager of corporate sales at MediaTek, told EE Times that “new premium” is where all the action is in smartphones today. Mid- to upper-tier players such as Oppo, Vivo and Lenovo are eager to close the gap with their top-tier rivals, in hopes of making a big leap in apps, features and AI.

MediaTek told us that 48 percent of global smartphone shipments in 2017 were from Chinese OEMs, largely aimed at emerging markets. MediaTek quoted a TrendForce report noting that brands focused on mid-range consumers saw huge growth in 2017, with Xiaomi reporting a remarkable 76 percent increase in smartphone production and significant increases from Transsion, Oppo and Vivo.

Helio P60 features four Arm Cortex-A73 cores and four Arm Cortex-A53 cores in an octa-core CPU complex. Based on the big.LITTLE octa-core design, MediaTek claims a 70 percent CPU performance improvement over its predecessors, the Helio P23 and Helio P30. With a new Mali-G72 GPU clocked at up to 800MHz, the P60 also improves GPU performance by 70 percent.

MediaTek’s neural network engine
Helio P60’s claim to fame, however, is its built-in NeuroPilot AI platform, which bridges the CPU, GPU and onboard AI accelerator. MediaTek’s AI framework manages this heterogeneous AI-compute architecture by coordinating the computing workload across the CPU, GPU and AI accelerator within the SoC to maximize performance and energy efficiency.

MediaTek has confirmed that the P60 integrates a Cadence Vision P6 core for its AI accelerator.

Cadence Vision P6 (Source: Cadence)

Compared to MediaTek’s flagship Helio X30, which used a Cadence Vision P5 rated at 70 GMAC per second (8-bit), the Helio P60 does 280 GMAC per second. Demler said, “So they dropped down a tier as far as the overall processor’s performance, but increased neural-engine performance by 4x at the same time.”

Asked to compare the performance of the Helio P60’s neural network engine with its rivals’, Demler said, “Huawei’s Kirin 970 does ~1TMAC/s (FP16), so it has 4x the neural-network performance of P60 at higher precision. At 280GMAC/s, the P60 is a close match for Apple’s A11, which does 300GMAC/s.”
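Taking those quoted peak-MAC figures at face value, the ratios behind the comparisons are easy to check. The snippet below is just that arithmetic; the numbers are the vendor and analyst estimates cited in this article, not measured benchmark results.

```cpp
#include <cstdio>

int main() {
    // Peak MAC-throughput figures quoted above (8-bit for the Cadence cores,
    // FP16 for the Kirin 970); estimates only, not benchmark results.
    const double helio_x30_gmacs = 70.0;    // Cadence Vision P5 in Helio X30
    const double helio_p60_gmacs = 280.0;   // Helio P60
    const double kirin_970_gmacs = 1000.0;  // Huawei Kirin 970, ~1 TMAC/s
    const double apple_a11_gmacs = 300.0;   // Apple A11

    std::printf("P60 vs. X30:       %.1fx\n", helio_p60_gmacs / helio_x30_gmacs);  // 4.0x
    std::printf("Kirin 970 vs. P60: %.1fx\n", kirin_970_gmacs / helio_p60_gmacs);  // ~3.6x, roughly 4x
    std::printf("A11 vs. P60:       %.2fx\n", apple_a11_gmacs / helio_p60_gmacs);  // ~1.07x, a close match
    return 0;
}
```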

No AI benchmarks
Most analysts we consulted, however, agreed that the lack of a benchmark for deep-learning accelerators makes it nearly impossible to make any meaningful comparison. Calling it “a big open issue,” Demler said the mobile-AI quagmire could easily lead us to “a GOPS/TOPS battle of marketing hype.”

Jim McGregor, principal analyst at Tirias Research, concurred. “This is a confusing topic because there are few details and no benchmarks,” he said. “MediaTek and others make it sound like these AI solutions can do anything,” but that is usually not the case, McGregor added.

For example, the Cadence Vision P6 core used in MediaTek’s P60 is optimized for computer-vision applications, not general-purpose neural networks, Demler said.

As McGregor explained, “First, you need to understand what most of these AI processors are.” For example, MediaTek, Apple and Huawei call their solutions “dedicated.” That means they use a single IP block for AI acceleration. “In most cases, that means an IP block licensed from someone else” such as Cadence or Ceva. Such an IP block “supports a configurable neural network with some limitations,” said McGregor. But “no one will tell exactly what those limitations are.”

Obviously, then, dropping a neural-network engine inside an app processor isn’t the end of the story. As McGregor pointed out, the development and training of new neural networks still need to take place in data centers, which depend on much more powerful, high-precision processors for training.

If app developers and OEMs want to exploit the neural engines inside a smartphone app processor, they need a software framework with hooks to the underlying hardware. “All the leading mobile-processor designers (Qualcomm, MediaTek, Huawei, Apple) now offer neural-network SDKs,” Demler observed. But they all need to support popular training frameworks like Caffe and Torch, he added.

In MediaTek’s case, the company offers what it calls NeuroPilot AI SDK, a framework that lets app developers and OEMs “look down into hardware, to see how AI apps can run on CPU, GPU and dedicated AI accelerator,” said MediaTek’s Moynihan.

Meanwhile, app developers and OEMs also need to be able to “look up, and to see what the Android Neural Networks API (Android NNAPI) says,” Moynihan said. Google developed Android NNAPI and its runtime engine for machine learning on Android devices. “MediaTek’s NeuroPilot SDK is fully compliant with Android NNAPI,” Moynihan added.

System Architecture for Android Neural Networks API (Source: Google)
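For a concrete sense of what “fully compliant with Android NNAPI” implies for developers, the sketch below uses the public NNAPI C interface from the Android NDK to build and run a toy one-operation model (an element-wise ADD of two small float tensors). It is only an illustration of the API layer that vendor SDKs such as NeuroPilot plug in beneath; it is not MediaTek code, and error handling plus the matching *_free cleanup calls are omitted for brevity.

```cpp
// Minimal NNAPI sketch: build, compile and run a one-operation model.
#include <android/NeuralNetworks.h>
#include <cstdint>

bool run_tiny_nnapi_model(const float in0[4], const float in1[4], float out[4]) {
    ANeuralNetworksModel* model = nullptr;
    if (ANeuralNetworksModel_create(&model) != ANEURALNETWORKS_NO_ERROR) return false;

    // Operands 0 and 1: 1x4 float32 input tensors; operand 2: fused-activation
    // scalar required by ADD; operand 3: the output tensor.
    uint32_t dims[2] = {1, 4};
    ANeuralNetworksOperandType tensor;
    tensor.type = ANEURALNETWORKS_TENSOR_FLOAT32;
    tensor.dimensionCount = 2;
    tensor.dimensions = dims;
    tensor.scale = 0.0f;
    tensor.zeroPoint = 0;
    ANeuralNetworksOperandType scalar;
    scalar.type = ANEURALNETWORKS_INT32;
    scalar.dimensionCount = 0;
    scalar.dimensions = nullptr;
    scalar.scale = 0.0f;
    scalar.zeroPoint = 0;

    ANeuralNetworksModel_addOperand(model, &tensor);   // index 0: input A
    ANeuralNetworksModel_addOperand(model, &tensor);   // index 1: input B
    ANeuralNetworksModel_addOperand(model, &scalar);   // index 2: activation
    ANeuralNetworksModel_addOperand(model, &tensor);   // index 3: output

    int32_t no_activation = ANEURALNETWORKS_FUSED_NONE;
    ANeuralNetworksModel_setOperandValue(model, 2, &no_activation, sizeof(no_activation));

    uint32_t add_inputs[3] = {0, 1, 2};
    uint32_t add_outputs[1] = {3};
    ANeuralNetworksModel_addOperation(model, ANEURALNETWORKS_ADD,
                                      3, add_inputs, 1, add_outputs);

    uint32_t model_inputs[2] = {0, 1};
    uint32_t model_outputs[1] = {3};
    ANeuralNetworksModel_identifyInputsAndOutputs(model, 2, model_inputs, 1, model_outputs);
    ANeuralNetworksModel_finish(model);

    // The runtime, not the app, decides where this runs; the app only states
    // a preference (low power here versus fast single answer or sustained speed).
    ANeuralNetworksCompilation* compilation = nullptr;
    ANeuralNetworksCompilation_create(model, &compilation);
    ANeuralNetworksCompilation_setPreference(compilation, ANEURALNETWORKS_PREFER_LOW_POWER);
    ANeuralNetworksCompilation_finish(compilation);

    ANeuralNetworksExecution* execution = nullptr;
    ANeuralNetworksExecution_create(compilation, &execution);
    ANeuralNetworksExecution_setInput(execution, 0, nullptr, in0, 4 * sizeof(float));
    ANeuralNetworksExecution_setInput(execution, 1, nullptr, in1, 4 * sizeof(float));
    ANeuralNetworksExecution_setOutput(execution, 0, nullptr, out, 4 * sizeof(float));

    ANeuralNetworksEvent* event = nullptr;
    ANeuralNetworksExecution_startCompute(execution, &event);
    ANeuralNetworksEvent_wait(event);
    return true;
}
```

The key point is the compilation step: the app states only a preference, and the NNAPI runtime, backed by vendor drivers, decides whether the work lands on the CPU, the GPU or a dedicated accelerator such as the P60’s APU.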

Among the methods deployed to enable smartphone processors to run AI apps, Qualcomm appears to have a slightly different approach.

McGregor said Qualcomm’s solution is different because “they use multiple resources already on their chip, including the Hexagon DSP, Adreno GPU, and Kryo CPU cores.”

However, he added, “With no benchmarks available, it is impossible to determine which method is better, but the Qualcomm model does offer more flexibility.”

Battle for AI software
Regardless of the underlying hardware, it is ultimately the software that can truly differentiate the AI experience on any given smartphone.

McGregor said, “Right now, these applications are being targeted towards common functions on the phone, such as photography and digital assistants. However, it is often left up to third-party software developers to develop and train the model for use on the device.”

He noted, “In limited cases, some models or libraries are available. Qualcomm developed some libraries around image recognition, Samsung around photography, and I’m sure Apple is developing its own models.”

In other cases, it is up to the applications developer, which is a significant limitation, McGregor pointed out. “Not many application developers are accustomed to deep learning or have access to large data centers necessary for deep learning,” he said.  

The Linley Group’s Demler also sounded a note of caution on AI software development in his recent Microprocessor Report. “The diversity of processor architecture creates a challenge for developers of Android apps, because these apps must work even on devices that lack a dedicated deep learning accelerator.” On the other hand, developers of iOS apps need only support a few Apple-designed processors, he noted.

Similarly, Kevin Krewell, principal analyst at Tirias Research, warned, “The biggest problem I see is that each silicon and IP vendor is doing Machine Learning differently. Arm may have the best opportunity to standardize multiple vendors on one IP.”

— Junko Yoshida, Chief International Correspondent, EE Times
