Intel’s Movidius Myriad X VPU hosts Neural Compute Engine, for ‘AI at the Edge’

New Products | By eeNews Europe

Myriad X is presented as the first system-on-chip (SoC) with a dedicated Neural Compute Engine for accelerating deep learning inference at the edge. The Neural Compute Engine is an on-chip hardware block specifically designed to run deep neural networks at high speed and low power without compromising accuracy, enabling devices to see, understand and respond to their environments in real time. The Myriad X architecture is capable of 1 TOPS (one trillion operations per second) of compute performance on deep neural network inference.
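To put the 1 TOPS figure in perspective, the sketch below relates a raw operations-per-second budget to an upper-bound inference rate. The per-inference cost and sustained-utilization figures are hypothetical assumptions for illustration, not Intel-published numbers.

```python
# Illustrative arithmetic only: relating a TOPS budget to inference rate.
# The network cost and utilization values are hypothetical assumptions,
# not figures published by Intel.

TOPS = 1e12  # 1 TOPS = one trillion operations per second


def max_inferences_per_second(ops_per_inference: float,
                              tops: float = 1.0,
                              utilization: float = 0.5) -> float:
    """Upper-bound inference rate for a network costing
    `ops_per_inference` operations, at a given sustained utilization."""
    return tops * TOPS * utilization / ops_per_inference


# Example: a hypothetical network costing 5 GOPs (5e9 ops) per frame,
# run on a 1 TOPS budget at 50% sustained utilization.
rate = max_inferences_per_second(5e9)
print(f"{rate:.0f} inferences/s")  # 1e12 * 0.5 / 5e9 = 100
```

Real-world rates depend heavily on memory bandwidth, precision, and how well a given network maps onto the engine, so this is a ceiling, not a prediction.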


“We’re on the cusp of computer vision and deep learning becoming standard requirements for the billions of devices surrounding us every day,” said Remi El-Ouazzane, vice president and general manager of Movidius, Intel New Technology Group. “Enabling devices with human-like visual intelligence represents the next leap forward in computing. With Myriad X, we are redefining what a VPU means when it comes to delivering as much AI and vision compute power as possible, all within the unique energy and thermal constraints of modern untethered devices.”


The chip delivers more than 4 TOPS of total performance (aggregated over all processing units), and its compact form factor and on-board processing are aimed at autonomous devices. In addition to its Neural Compute Engine, Myriad X combines imaging, visual processing and deep learning inference in real time with:


– Programmable 128-bit VLIW vector processors: 16 vector processors, optimized for computer vision workloads, run multiple imaging and vision application pipelines simultaneously.

– Increased configurable MIPI lanes: 16 MIPI lanes, part of the chip’s rich set of interfaces, connect up to 8 HD-resolution RGB cameras directly to Myriad X, supporting up to 700 million pixels per second of image signal processing throughput.

– Enhanced vision accelerators: over 20 hardware accelerators perform tasks such as optical flow and stereo depth without introducing additional compute overhead.

– 2.5 MB of homogeneous on-chip memory: a centralized on-chip memory architecture allows up to 450 GB/s of internal bandwidth, minimizing latency and reducing power consumption by limiting off-chip data transfer.
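The camera figures above can be sanity-checked with simple arithmetic. The sketch below divides the stated 700 Mpixel/s ISP throughput across the 8 directly attached cameras; the assumption that “HD” means 1280×720 is ours, so the resulting frame rate is an estimate, not an Intel specification.

```python
# Back-of-the-envelope check of the camera/throughput figures above.
# Assumption (ours): "HD resolution" means 1280x720 (720p).

ISP_THROUGHPUT = 700e6   # pixels per second (stated by Intel)
NUM_CAMERAS = 8          # directly attached cameras (stated by Intel)
HD_PIXELS = 1280 * 720   # pixels per 720p frame (our assumption)

# Pixel budget available to each camera, and the frame rate it implies.
per_camera_pixels = ISP_THROUGHPUT / NUM_CAMERAS
fps_per_camera = per_camera_pixels / HD_PIXELS
print(f"~{fps_per_camera:.0f} fps per camera")  # roughly 95 fps
```

At 1080p (1920×1080) the same budget would drop to roughly 42 fps per camera, which is why the lane count and ISP throughput are quoted together.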


Myriad X, Intel continues, is the newest generation in a lineage of Movidius VPUs, which are purpose-built for embedded visual intelligence and inference. Movidius VPUs achieve significant performance at low power by merging three architectural elements that provide sustained high performance on deep learning and computer vision workloads: an array of programmable VLIW vector processors with an instruction set tuned to computer vision and deep learning workloads; a collection of hardware accelerators supporting image signal processing, computer vision and deep learning inference; and a commonly accessible intelligent memory fabric that minimizes on-chip data movement.








