
Lattice launches its own AI models for its FPGAs


Technology News | By Nick Flaherty



It is no longer sufficient to provide IP blocks and low-level software drivers; OEM customers are now asking for the AI models that will run on Lattice FPGAs.

“There is an explosion of devices that are implementing some type of AI smart workload, and a lot of the time that’s driven by latency or bandwidth, as well as security or privacy. Over the last three or four years you’ve seen a few devices, mostly in the cloud, but now there is an explosion of devices at the edge,” said Hussein Osman, product marketing manager at Lattice Semiconductor.

“We are building our FPGAs on 28nm FD-SOI, and we have clusters of memory and compute resources sitting next to each other, which provides a very good platform for AI workloads,” said Osman.

The Lattice sensAI v4.1 stack adds reference designs that combine the hardware with trained machine learning models for user presence detection, attention tracking, face framing and onlooker detection in PC camera sub-systems.

The stack also includes an updated neural network compiler and supports Lattice sensAI Studio, a GUI-based tool with a library of AI models trained for popular use cases. This is part of the value of an FPGA in helping PC makers manage an increasingly complex supply chain, says Osman.

“The complexity of the ecosystem is limiting OEMs, and the OEM has to deal with multiple vendors and needs to make sure they have two or three vendors to avoid being caught out by the supply chain,” he said. “That brings a lot of complexity, and they also have to deal with multiple SoC vendors, so we provide them with their own silicon that is sensor- and OS-agnostic, which simplifies quite a bit of the complexity. We have been able to perform really well throughout the supply chain issues to make sure the experience of the designer across the portfolio is the same, and it doesn’t matter which sensor is used or which OS.”

The AI can be implemented in different ways, either using a hardware description language (HDL) for a hardwired accelerator or as instruction extensions. 

“We created a hardware accelerator in HDL on the 150MHz fabric, or we can use the neural network compiler. The compiler is aware of the model and the IP, and makes sure the model is optimised for the CNN engine. A lot of the time the application designer is focussed on hitting specific requirements such as inference speed and accuracy, and we have created models that can accomplish those goals while optimising the system as much as possible.”
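The Lattice compiler itself is proprietary, but the step Osman describes, mapping a trained model onto a fixed-point CNN engine, resembles the post-training quantisation found in open toolchains. Below is a minimal sketch of that kind of step using the TensorFlow Lite converter as a stand-in; the model file, input shape and calibration data are hypothetical.

# Illustrative only: the Lattice neural network compiler is proprietary,
# so this sketch uses the open TensorFlow Lite converter as a stand-in for
# the same kind of step -- quantising a trained CNN down to the fixed-point
# arithmetic a small FPGA inference engine expects.
import numpy as np
import tensorflow as tf

def representative_data():
    # Calibration samples drawn from the target camera feed (dummy data here).
    for _ in range(100):
        yield [np.random.rand(1, 64, 64, 1).astype(np.float32)]

# Hypothetical trained presence-detection model.
model = tf.keras.models.load_model("presence_cnn.h5")

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
# Force full-integer ops so every layer maps onto an int8 CNN engine.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

open("presence_cnn_int8.tflite", "wb").write(converter.convert())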

The key is the sensAI Studio tool. “This allows you to go through all the models and pick the ones that are interesting, pruned and optimised for our devices with an AutoML tool.”
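The AutoML selection Osman describes boils down to a constrained search: keep the smallest model that still meets the designer’s accuracy target within the FPGA’s resource budget. A minimal sketch of that logic, with hypothetical accuracy and LUT figures standing in for real measurements:

# Hypothetical model zoo: name -> (validation accuracy, LUT usage).
ZOO = {
    "presence_s": (0.93, 2100),
    "presence_m": (0.96, 3400),
    "presence_l": (0.97, 6100),
}
ACCURACY_TARGET = 0.95
LUT_BUDGET = 5000

# Keep the cheapest candidate that meets the accuracy target and fits.
best = min(
    (name for name, (acc, luts) in ZOO.items()
     if acc >= ACCURACY_TARGET and luts <= LUT_BUDGET),
    key=lambda name: ZOO[name][1],
    default=None,
)
print("selected:", best)  # -> presence_m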

But the difference is that Lattice has developed its own trained AI models.

“The first set of applications are instant-on: presence sensing, face framing and onlooker detection,” said Osman. “With presence sensing you can gain 28% additional battery life by dimming the screen when the user is not looking at it.”
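On the host side, a presence signal like this typically gates the backlight through a simple debounce policy so the screen does not flicker on brief glances away. A sketch of that policy, where present() and set_brightness() are hypothetical stand-ins for the sensor readout and the operating system’s backlight control:

# Sketch of the host-side power policy a presence sensor enables: dim the
# panel only after the user has been absent for a debounce period, and
# restore brightness immediately when they return.
import time

DIM_AFTER_S = 10          # seconds of absence before dimming
ACTIVE, DIMMED = 100, 20  # brightness levels (percent)

def presence_loop(present, set_brightness):
    absent_since = None
    while True:
        if present():
            absent_since = None
            set_brightness(ACTIVE)
        else:
            absent_since = absent_since or time.monotonic()
            if time.monotonic() - absent_since >= DIM_AFTER_S:
                set_brightness(DIMMED)
        time.sleep(0.5)  # poll at 2Hz; the FPGA does the per-frame inference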

“This takes significant data collection, with the model trained fairly, avoiding bias, and tested across the world to meet our system goals. This is a huge piece of the development,” he said.

“In some instances we work with partners who already have our devices on the board, adding the AI and image processing, and sometimes replacing something else such as a time of flight (ToF) sensor and adding other functions. This is an area where we are investing heavily, and we will add new areas,” he said.

www.latticesemi.com
