Visual Processor IP Runs Deep Convolutional Nets in Real Time

By Nitin Dahad, EE Times

LONDON — German intellectual property supplier Videantis has launched its sixth-generation processor IP architecture, adding deep learning capability to a solution that combines computer vision, image processing and video coding on a single unified SoC platform.

The initial target is the automotive industry, which is moving toward more sophisticated advanced driver assistance systems (ADAS) and, ultimately, fully autonomous vehicles, both of which depend on multiple cameras.

The new v-MP6000UDX visual processing architecture scales from a single media processor core up to 256 cores and is built on the company's own programmable DSP architecture. Each core is a dual-issue VLIW design that delivers eight times the multiply-accumulate throughput of the previous generation, which the company says yields up to a 1,000x performance improvement in deep learning applications while maintaining software compatibility with its previous v-MP4000HDX architecture.

The v-MP6000UDX processor architecture includes an extended instruction set optimized for running convolutional neural networks (CNNs), increases multiply-accumulate throughput eightfold to 64 MACs per core, and raises the maximum core count from a typical eight to 256.
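
To put those figures in perspective, here is a rough back-of-the-envelope throughput calculation based only on the MAC and core counts quoted above; the clock frequency is an illustrative assumption, as the article does not state one.

```python
# Rough peak-MAC estimate from the figures quoted in the article.
# 64 MACs/core and 256 cores are stated; the 600 MHz clock is an
# illustrative assumption, not a Videantis specification.
macs_per_core_per_cycle = 64
max_cores = 256
clock_hz = 600e6  # assumed clock, for illustration only

peak_macs_per_second = macs_per_core_per_cycle * max_cores * clock_hz
print(f"{peak_macs_per_second / 1e12:.1f} TMAC/s")  # 9.8 TMAC/s at 600 MHz
```

Peak figures like this ignore utilization and data movement, the very bottleneck Stolberg alludes to below.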

The heterogeneous multicore architecture includes multiple high-throughput VLIW/SIMD media processors with a number of stream processors that accelerate bitstream packing and unpacking in video codecs. Each processor includes its own multi-channel DMA engine for efficient data movement to local, on-chip, and off-chip memories.

The v-MP6000UDX subsystem can comprise anything from a single v-MP (media processor core) to an array of 256 cores for embedded vision with deep learning.
Source: Videantis

Alongside the new architecture, Videantis also announced v-CNNDesigner, a new tool for porting neural networks that have been designed and trained in frameworks such as TensorFlow or Caffe. The tool analyzes, optimizes and parallelizes trained neural networks for efficient processing on the v-MP6000UDX architecture, fully automating the implementation task; the company says it takes minutes to get CNNs running on its low-power processing architecture.
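
For context, the sketch below shows the kind of small TensorFlow/Keras CNN that would typically be designed and trained in such a framework before being handed to a porting tool like v-CNNDesigner. It is a generic illustration, not Videantis code: the layer sizes, the ten-class output and the saved-model filename are assumptions, and only the standard TensorFlow APIs shown are real.

```python
import tensorflow as tf

# A small CNN for 32x32 RGB inputs, e.g. a traffic-sign classifier
# (an illustrative choice; any trained Keras or Caffe net would do).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 32, 3)),
    tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# ...train on your dataset, then export the trained network; a tool such
# as v-CNNDesigner would then analyze and parallelize it for the target.
model.save("trained_cnn.keras")  # hypothetical filename
```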

“We’ve quietly been working on our deep learning solution together with a few select customers for quite some time and are now ready to announce this exciting new technology to the broader market,” said Hans-Joachim Stolberg, CEO at Videantis. “To efficiently run deep convolutional nets in real-time requires new performance levels and careful optimization, which we’ve addressed with both a new processor architecture and a new optimization tool. Compared to other solutions on the market, we took great care to create an architecture that truly processes all layers of CNNs on a single architecture rather than adding standalone accelerators where the performance breaks on the data transfers in between.”

Stolberg said the v-MP6000UDX architecture increases throughput on key neural network implementations by roughly three orders of magnitude, while remaining extremely low power and compatible with the company’s v-MP4000HDX architecture.
