NXP Semiconductors' small world of machine learning

July 05, 2018 // By James Morra
Only two years ago, NXP Semiconductors seemed behind in the artificial intelligence race. After Qualcomm announced it would buy the company for over $40 billion, NXP’s chief executive Richard Clemmer admitted it was not yet working on machine learning. It also needed more processing power to compete with Nvidia in supplying the brains of driverless car prototypes.

The Eindhoven, Netherlands-based company changed its strategy as new applications for the technology emerged. Last month, it introduced a software tool that lowers the bar for customers to integrate machine learning into consumer electronics, factory equipment and cars. The software improves how efficiently its embedded chips handle inference jobs.

This is only the first act, though. The company will integrate scalable artificial intelligence accelerators in its chips in 2019, said Gowri Chindalore, NXP’s head of technology and business strategy, in a recent interview. NXP is currently weighing whether to build an accelerator from scratch or license another company’s cores to enter the market faster, he said. The deliberations have not been previously reported.

NXP is under pressure to show customers that embedded chips can handle machine learning tasks without being shackled to the cloud, where training and inference typically occur. The benefits include lower latency and tougher security—which are critical for applications like autonomous driving and industrial robots—as well as conserving power normally used to communicate with the cloud—critical for anything battery-powered.

The latest tool in the company's EdgeScale platform is designed to compress machine learning models. The resulting inference engine can run inside the graphics processing units and digital-signal processors in its Arm Cortex-A-based chips, which include the i.MX and Layerscape product lines. Using the software, customers can store algorithms trained with TensorFlow in the cloud and automatically deploy them to chips out in the field.
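The article does not detail how EdgeScale compresses models, but a common technique for shrinking trained networks so they fit on embedded processors is post-training quantization: storing weights as 8-bit integers instead of 32-bit floats. The sketch below is purely illustrative of that general idea, not NXP's actual implementation; all function names are hypothetical.

```python
import numpy as np

def quantize_int8(weights):
    """Map float32 weights to int8 with a symmetric per-tensor scale."""
    scale = np.abs(weights).max() / 127.0  # largest magnitude maps to +/-127
    q = np.clip(np.round(weights / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights for inference-time arithmetic."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)  # stand-in for a weight tensor
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
# int8 storage is 4x smaller than float32, at a bounded rounding cost
print(q.dtype, w.nbytes // q.nbytes, float(np.max(np.abs(w - w_hat))))
```

Storing weights this way cuts the model's memory footprint by four, and integer math maps well onto the DSPs mentioned above; the tradeoff is a small, bounded rounding error per weight.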

“The platform is meant to help our customers learn about artificial intelligence,” said Martyn Humphries, NXP’s vice president of consumer and industrial i.MX processors, in an interview. Each product has tradeoffs in terms of accuracy, power consumption and speed. The software spots the tradeoffs so that customers can choose the processor that works best for their application.

“Simplicity and device-specific optimization will be critical for broader adoption in the embedded market given how fragmented the hardware industry is,” Chris Rommel, executive vice president of market research firm VDC Research, told Electronic Design. “And what is exciting to me is to think about how that could ultimately extend to their broader portfolio.”
