Movidius shows neural network stick
The Fathom Neural Compute Stick, along with the Fathom software framework, can be used as a convolutional neural network acceleration module.
The combination allows neural network applications to be moved from the cloud, where they are usually deployed, into end-user devices. The goal is to accelerate the development of deep learning applications on PCs with the knowledge that they can then be deployed to run on a Myriad 2 vision processing unit (VPU) contained within mobile devices such as smartphones.
“It’s going to mean that very soon, consumers are going to be introduced to surprisingly smart applications and products. It means the same level of surprise and delight we saw at the beginning of the smartphone revolution, we’re going to see again with the machine intelligence revolution. With more than 1 million units of Myriad 2 already ordered, we want to make our VPU [vision processing unit] the de-facto standard when it comes to embedded deep neural networks,” said Remi El-Ouazzane, CEO of Movidius, in a statement.
The Fathom Neural Compute Stick can run fully-trained neural networks while consuming less than 1W, Movidius said.
In January it was revealed that Movidius had extended a relationship with Google to focus on neural network technology, something that Google already uses extensively. The collaboration focused on plans to accelerate the adoption of deep learning in mobile devices (see Google’s deep learning comes to Movidius).
Now, thanks to USB connectivity, the Fathom Neural Compute Stick can be connected to a range of devices to act as a neural network accelerator, improving their deep learning capability by orders of magnitude, Movidius said. The stick also behaves as a neural network profiling and evaluation tool, so developers can prototype deep learning systems faster and more efficiently.
Neural networks are used for object recognition, natural speech understanding, and autonomous navigation for cars, and they differ from conventional processing in that the software can be trained to learn how to perform tasks. Neural networks significantly outperform traditional approaches in tasks such as language comprehension, image recognition and pattern detection, but they typically require massively parallel processing to do so energy-efficiently.
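To make concrete what "running a fully-trained network" means, a minimal sketch follows. A trained network is just a fixed set of weights, and inference reduces to the matrix multiplies and activations that an accelerator offloads. The weights here are hand-picked for illustration (a toy network computing XOR), not anything produced by the Fathom toolchain:

```python
import numpy as np

# Hand-picked (hypothetical) weights for a tiny two-layer ReLU network
# that computes XOR. On a deployed device, only this forward pass runs;
# no training happens there.
W1 = np.array([[1.0, 1.0],
               [1.0, 1.0]])      # input -> hidden weights
b1 = np.array([0.0, -1.0])      # hidden biases
W2 = np.array([1.0, -2.0])      # hidden -> output weights

def relu(x):
    return np.maximum(x, 0.0)

def forward(x):
    """One inference pass: matrix multiply, bias, activation, multiply."""
    h = relu(x @ W1 + b1)
    return h @ W2

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, forward(np.array(x, dtype=float)))  # prints 0, 1, 1, 0
```

Each inference is a handful of dense linear-algebra operations over fixed weights, which is why this workload parallelizes well and can run within a tight power budget on dedicated hardware.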
Fathom allows developers to take neural networks out of the PC-training phase and deploy such networks on mobile devices containing a Myriad 2 processor. Fathom supports the Caffe and TensorFlow deep learning frameworks.
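The train-then-deploy split described above can be sketched generically. This is not the actual Fathom or Caffe/TensorFlow toolchain, just an illustration of the workflow: training on a PC produces a frozen weight file, and the deployed device only loads those weights and runs inference:

```python
import numpy as np

# --- PC side: training would produce weights; here they are simply
# hypothetical fixed values, exported to a file ---
weights = {"W": np.array([[0.5, -0.2], [0.1, 0.9]]), "b": np.array([0.0, 0.1])}
np.savez("model.npz", **weights)

# --- Device side: load the frozen model and run a forward pass only ---
model = np.load("model.npz")
x = np.array([1.0, 2.0])
y = x @ model["W"] + model["b"]
print(y)
```

The key point is the one-way flow: the heavy, iterative training step stays on the PC, while the device receives only the compact, frozen result and performs the comparatively cheap forward passes.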
Related links and articles:
News articles:
Google’s deep learning comes to Movidius
Video: Mobileye CTO on deep learning and automotive sensing
How will deep learning change SoCs?