An on-chip hardware block in the Myriad X, the neural compute engine is specifically designed to run deep neural networks at high speed and low power without compromising accuracy. With the neural compute engine, says the company, the Myriad X architecture delivers one trillion operations per second (1 TOPS) of compute performance on deep neural network inferences.
“We’re on the cusp of computer vision and deep learning becoming standard requirements for the billions of devices surrounding us every day,” says Remi El-Ouazzane, vice president and general manager of Movidius, Intel New Technology Group. “Enabling devices with humanlike visual intelligence represents the next leap forward in computing. With Myriad X, we are redefining what a VPU means when it comes to delivering as much artificial intelligence (AI) and vision compute power as possible, all within the unique energy and thermal constraints of modern untethered devices.”
Positioned as ideal for autonomous device solutions, the 8.1 x 8.8-mm Myriad X is capable of more than 4 TOPS of total performance. In addition to its neural compute engine, the device combines imaging, visual processing, and deep learning inference in real time with programmable 128-bit very long instruction word (VLIW) vector processors, an increased number of configurable MIPI lanes, enhanced vision accelerators, and 2.5 MB of homogeneous on-chip memory.
The Myriad X VPU is available in two chip packages supporting different memory configurations. It comes with an SDK that includes a neural network compiler and “a specialized FLIC framework with a plug-in approach to developing application pipelines.”
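To give a feel for what a plug-in approach to building application pipelines means, here is a minimal generic sketch. All names below (`Plugin`, `Pipeline`, `Resize`, `Threshold`) are hypothetical illustrations of the pattern, not the actual FLIC API.

```python
# Hypothetical sketch of a plug-in style processing pipeline.
# These class names are illustrative only -- NOT the actual FLIC API.

class Plugin:
    """A pipeline stage that transforms a frame and passes it on."""
    def process(self, frame):
        raise NotImplementedError


class Resize(Plugin):
    """Placeholder 'resize' stage: repeats each value `scale` times."""
    def __init__(self, scale):
        self.scale = scale

    def process(self, frame):
        return [p for p in frame for _ in range(self.scale)]


class Threshold(Plugin):
    """Binarizes each value against a cutoff."""
    def __init__(self, cutoff):
        self.cutoff = cutoff

    def process(self, frame):
        return [1 if p >= self.cutoff else 0 for p in frame]


class Pipeline:
    """Chains plug-ins so each stage's output feeds the next stage."""
    def __init__(self):
        self.stages = []

    def add(self, plugin):
        self.stages.append(plugin)
        return self  # allow fluent chaining

    def run(self, frame):
        for stage in self.stages:
            frame = stage.process(frame)
        return frame


if __name__ == "__main__":
    pipe = Pipeline().add(Resize(2)).add(Threshold(5))
    print(pipe.run([3, 7]))  # -> [0, 0, 1, 1]
```

The design choice the quote hints at is composability: each stage is a self-contained unit with a uniform interface, so developers can assemble, reorder, or swap stages without rewriting the surrounding application.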