Imagination IP in Chinese AI integration

February 01, 2021 // By Nick Flaherty
The IMGDNN application programming interface from Imagination Technologies has been integrated into Baidu’s PaddlePaddle AI tools.

Imagination Technologies has teamed with Baidu to integrate its IMGDNN API into the Paddle-Lite artificial intelligence (AI) ecosystem.

Paddle-Lite is part of Baidu’s deep learning framework PaddlePaddle (Parallel Distributed Deep Learning), and the IMGDNN API enables developers to target PowerVR-architecture-based graphics processing units (GPUs) and neural network accelerators (NNAs).

The open-source technology is China’s first self-developed industrial deep learning platform and includes core training and inference frameworks, application-based model banks, end-to-end development kits, and a range of toolkits, from visualisation tools to a third-party model hub. To date, PaddlePaddle has over 2.65 million developers around the world, serves more than 100,000 enterprises, and has been used to create more than 340,000 models.

The move increases the number of frameworks supported by Imagination’s hardware platforms and enables developers to run AI applications inside heterogeneous systems, such as those containing CPUs, GPUs and NNAs, more easily.

Developers can now take advantage of the full PaddlePaddle toolset, including PaddleSlim, a compression tool for optimising pre-trained neural network models and converting them so they are suitable for execution on Imagination’s NNA. The code has already been merged into the PaddlePaddle development branch and will appear in the next major stable release, V2.8.

“It is exciting to be collaborating with Imagination in the AI space,” said Yunkai Wang, Ecosystem Product Owner at PaddlePaddle. “PaddlePaddle’s Paddle-Lite inferencing engine can support multiple types of hardware, operating systems, and AI models comprehensively, and the successful integration with Imagination’s hardware enriches our ecosystem further. We look forward to working with Imagination in the future to accelerate AI innovation.”

The NNA IP blocks are highly scalable and are specifically designed to accelerate machine learning workloads at the edge. The IP is silicon-proven and has been licensed into markets including automotive, mobile, AIoT and datacentre/desktop. The latest Series4 multi-core can scale to 500 TOPS and beyond. Architectural highlights include Imagination Tensor Tiling to reduce bandwidth needs, as well as mature software and tools.


