Imagination Updates Neural Network Accelerator

By Nitin Dahad, EE Times

Delivers over 160 TOPS with programmable extensibility

LONDON — Imagination Technologies has launched an update to its neural-network accelerator (NNA) with its new PowerVR Series3NX architecture, which can deliver over 160 tera operations per second (TOPS) in multicore designs and provides programmable extensibility.

The new PowerVR Series3NX features architectural improvements over the previous-generation PowerVR Series2NX, including lossless weight compression, security enhancements, and multicore support. In addition, the PowerVR Series3NX-F (flexible) IP configuration provides programmable extensibility through a new compute SDK; using this capability, customers can differentiate and add value to their offerings via the OpenCL framework.
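The article does not detail the compute SDK itself, but the kind of custom operation a developer might add through the OpenCL path can be sketched with a standard OpenCL C kernel. The example below is purely illustrative; the kernel name, the choice of a hard-swish activation, and the buffer layout are assumptions for the sketch, not part of Imagination's SDK.

```c
// Illustrative OpenCL C kernel: a custom activation a developer might add as
// an extension to a network running on the Series3NX-F programmable path.
// The kernel name and the hard-swish operation are hypothetical examples;
// only standard OpenCL C built-ins (get_global_id, clamp) are used here.
__kernel void custom_activation(__global const float *in,
                                __global float *out,
                                const int n)
{
    int i = get_global_id(0);
    if (i < n) {
        float x = in[i];
        // hard-swish: x * clamp(x + 3, 0, 6) / 6
        float t = clamp(x + 3.0f, 0.0f, 6.0f);
        out[i] = x * t / 6.0f;
    }
}
```

In a Series3NX-F design, a kernel along these lines would run on the programmable portion of the pipeline alongside the fixed-function NNA layers, though the exact integration mechanism depends on Imagination's SDK.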

A single Series3NX core scales from 0.6 to 10 TOPS, while multicore implementations can scale beyond 160 TOPS. The new NNA features a 40% boost in performance in the same silicon area over the previous generation, giving SoC manufacturers a nearly 60% improvement in performance efficiency and a 35% reduction in bandwidth.

The architecture enables SoC manufacturers to optimize compute power and performance across a range of embedded markets such as automotive, mobile, smart surveillance, and IoT edge devices. Imagination hopes that this flexibility and scalability, combined with a near doubling of top-line performance, will further drive mass AI adoption in embedded devices.

“There are tremendous opportunities to apply AI at the edge to create devices that are more capable, more autonomous, and easier to use,” said Jeff Bier, founder of the Embedded Vision Alliance. “In many of these applications, a key challenge is achieving the right combination of processing performance, flexibility, cost, and power consumption.”

New PowerVR tooling extensions map emerging network models efficiently, offering a mix of flexibility and performance optimization. With Imagination's dedicated deep-neural-network (DNN) API, developers can write AI applications targeting the Series3NX architecture as well as existing PowerVR GPUs. The API works across multiple SoC configurations, making it easy to prototype on existing devices.
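Imagination's DNN API calls are not documented in the article, so the sketch below instead shows a generic host-side flow for dispatching the custom kernel above, using only the standard OpenCL 1.2 C API. Every identifier here comes from stock OpenCL; the actual prototyping flow through Imagination's tools would differ in its details.

```c
/*
 * Minimal host-side dispatch of the custom_activation kernel, using the
 * standard OpenCL 1.2 C API. This is a neutral sketch of the prototyping
 * flow described in the article, not Imagination's DNN API. Error checking
 * is omitted for brevity.
 */
#include <CL/cl.h>
#include <stdio.h>

/* Same kernel as shown earlier, embedded as a source string. */
static const char *kernel_src =
    "__kernel void custom_activation(__global const float *in,\n"
    "                                __global float *out, const int n) {\n"
    "    int i = get_global_id(0);\n"
    "    if (i < n) {\n"
    "        float x = in[i];\n"
    "        float t = clamp(x + 3.0f, 0.0f, 6.0f);\n"
    "        out[i] = x * t / 6.0f;\n"
    "    }\n"
    "}\n";

int main(void)
{
    cl_int err;
    cl_platform_id platform;
    cl_device_id device;

    /* Pick the first available platform/device; a real build would
       explicitly select the target GPU or accelerator device. */
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_DEFAULT, 1, &device, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, &err);
    cl_command_queue q = clCreateCommandQueue(ctx, device, 0, &err);

    /* Compile the kernel from source and look it up by name. */
    cl_program prog = clCreateProgramWithSource(ctx, 1, &kernel_src, NULL, &err);
    clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "custom_activation", &err);

    /* A small dummy buffer standing in for an intermediate feature map. */
    enum { N = 1024 };
    float in[N], out[N];
    for (int i = 0; i < N; i++) in[i] = (float)i / N - 0.5f;

    cl_mem d_in  = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                  sizeof(in), in, &err);
    cl_mem d_out = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof(out), NULL, &err);

    int n = N;
    clSetKernelArg(k, 0, sizeof(cl_mem), &d_in);
    clSetKernelArg(k, 1, sizeof(cl_mem), &d_out);
    clSetKernelArg(k, 2, sizeof(int), &n);

    /* Launch one work-item per element and read back the result. */
    size_t global = N;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, d_out, CL_TRUE, 0, sizeof(out), out, 0, NULL, NULL);

    printf("out[0] = %f\n", out[0]);
    return 0;
}
```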

Imagination launched the previous generation of its NNA, the PowerVR Series2NX, in 2017. The company said that it has sold multiple licenses across several markets, though it declined to give customer specifics. Neal Forse, senior director of product management for vision and AI at Imagination Technologies, told EE Times that the company has sold licenses into the mobile market and will be announcing an automotive customer shortly. To date, the Series2NX has been licensed predominantly by customers in the mobile and automotive markets.

The company also announced three new GPU cores — the PowerVR Series9XEP, Series9XMP, and Series9XTP — which span entry-level to high-end and bring efficiency improvements and new features for additional performance. All three GPUs incorporate PVRIC4, the latest generation of Imagination's image-compression technology, which enables random-access, visually lossless image compression for bandwidth and memory-footprint savings of at least 50%, helping systems overcome bandwidth-related performance constraints. Other features include a new alpha buffer/block hint capability that reduces composition workload bandwidth and cost.

Designers can pair the new GPUs with the PowerVR Series3NX NNA in the same silicon footprint to enable vision and pre-processing algorithms via the GPU and highly optimized fixed-point neural-network processing with the NNA.

