Machine learning-ready SoCs from the edge to the cloud
Xilinx aims to break down the barriers to adoption for its low- to mid-range SoCs and MPSoCs. The company illustrated its strategy about two years ago with the launch of the SDSoC development environment based on C, C++ and OpenCL, a huge improvement, but perhaps not enough to address the needs of software and systems engineers with little or no hardware design expertise.
Now, the software-defined development flow allows software engineers to combine efficient implementations of machine learning and computer vision algorithms into highly responsive systems. They start in a familiar, Eclipse-based development environment using C, C++ and/or OpenCL and the associated compiler technology (the SDSoC development environment), within which they can target reVISION hardware platforms and draw from a pool of acceleration-ready computer vision libraries (and, soon, the OpenVX framework) to quickly build an application.
The reVISION stack includes a broad range of development resources for platform, algorithm and application development, including support for the most popular neural networks such as AlexNet, GoogLeNet, SqueezeNet, SSD, and FCN. Additionally, the stack provides library elements, including pre-defined and optimized implementations of CNN network layers, required to build custom neural networks (DNN/CNN). This is complemented by a broad set of acceleration-ready OpenCV functions for computer vision processing. For application-level development, Xilinx supports popular frameworks including Caffe for machine learning and OpenVX for computer vision (to be released later in 2017). The reVISION stack also includes development platforms from Xilinx and third parties based on Zynq SoCs and MPSoCs.
For computer vision and other proprietary algorithms, users can profile their software code to identify bottlenecks and label the specific functions they want to speed up and ‘hardware optimize’. A ‘system optimizing compiler’ is then used to create an accelerated implementation, including the processor/accelerator interface (data movers) and software drivers. When combining computer vision and machine learning, this compiler will create an optimized, fused implementation.
Using the reVISION stack, Xilinx claims its customers get the fastest path to the most responsive vision systems, with up to 6x better images/second/watt in machine learning inference, 40x better frames/second/watt in computer vision processing, and one-fifth the latency compared with competing embedded GPUs and typical SoCs.
Leveraging the unique advantages of reconfigurability and any-to-any connectivity, developers can use the stack to rapidly develop and deploy upgrades, in effect future-proofing their vision-based systems as neural networks, algorithms, sensor technologies and interface standards continue to evolve.
“We are seeing tremendous interest in machine learning from the edge to the cloud, and believe that our ongoing investment in development stacks will accelerate mainstream adoption,” said Steve Glaser, SVP of Corporate Strategy at Xilinx. “Today, hundreds of embedded vision customers have realized greater than 10x performance and latency advantages with Xilinx technology. With the addition of reVISION, those same advantages will now become available to thousands of customers.” The reVISION stack will be available in the second quarter of 2017.
More information at www.xilinx.com/revision
Related articles:
Cognitive computing platform unites Xilinx and IBM
Switching from C/C++ to FPGA hardware acceleration
Xilinx’ SDNet: where software defined networks truly begin