
NXP supports ONNX

By eeNews Europe



NXP’s eIQ is a comprehensive machine learning (ML) toolkit that helps original equipment manufacturers (OEMs) balance performance needs and system cost when deploying neural networks and their associated inference engines at the edge.

ONNX is an open standard for representing deep learning models that enables trained models to be transferred between existing Artificial Intelligence (AI) frameworks. By importing models in the ONNX format, NXP’s eIQ allows a model trained in one framework to be deployed for inference in another. ML developers can then deploy inference engines across NXP’s scalable portfolio of MCUs, high-performance i.MX RT crossover processors, and highly integrated i.MX and Layerscape applications processors.
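
As an illustration of this portability (a minimal sketch, not eIQ-specific code): a model trained in PyTorch can be exported to a .onnx file and then executed by any ONNX-capable runtime. ONNX Runtime on a host PC stands in below for a target inference engine; the TinyNet model and the tinynet.onnx file name are hypothetical examples.

```python
# Hypothetical example: export a trained PyTorch model to ONNX,
# then run it with ONNX Runtime to confirm the interchange works.
import numpy as np
import torch
import torch.nn as nn
import onnxruntime as ort

class TinyNet(nn.Module):
    """A placeholder network standing in for a real trained model."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        return torch.relu(self.fc(x))

model = TinyNet().eval()
dummy = torch.randn(1, 4)  # example input that fixes the graph's shape

# Export the model to a framework-neutral .onnx file.
torch.onnx.export(model, dummy, "tinynet.onnx",
                  input_names=["input"], output_names=["output"])

# Any ONNX-capable engine can now load the file; ONNX Runtime is used
# here only as a stand-in for a target-specific inference engine.
session = ort.InferenceSession("tinynet.onnx")
outputs = session.run(None, {"input": dummy.numpy().astype(np.float32)})
print(outputs[0])
```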

“When it comes to choosing from among the many machine learning frameworks, we want our customers to have maximum flexibility and freedom,” said Markus Levy, head of the Artificial Intelligence Technology Center at NXP. “An interoperable ML ecosystem is key to driving innovation, where designers can have the freedom to develop what’s needed for their applications. We’re happy to bring the ONNX benefits to our customer community of ML developers.”

ONNX, a community project created by Facebook, AWS, and Microsoft, is an open ecosystem for interchangeable AI models that provides a common way to represent neural network models. ONNX models are currently supported in Caffe2, Microsoft Cognitive Toolkit, MXNet, PaddlePaddle, and PyTorch, and there are connectors for many other common frameworks and libraries. More information on ONNX can be found at https://onnx.ai/.
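
Because an ONNX file carries a self-describing graph, it can also be inspected and validated without the framework that produced it. A minimal sketch with the official onnx Python package (the file name continues the hypothetical example above):

```python
import onnx

# Load the framework-neutral model file (hypothetical name from above).
model = onnx.load("tinynet.onnx")

# check_model raises an exception if the graph violates the ONNX spec.
onnx.checker.check_model(model)

# The declared inputs and outputs are readable without the source framework.
print([i.name for i in model.graph.input])
print([o.name for o in model.graph.output])
```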

More information: www.nxp.com

Related news: https://www.eenewsembedded.com/news/nxp-launches-machine-learning-toolkit-0

