Built with the company’s sensAI solution stack and running on low-power Lattice Nexus FPGAs, the new solutions are designed to help OEMs develop smart, always-on devices with low-power, hardware-accelerated AI capabilities that are field-upgradeable to support future AI algorithms. The solutions are offered as a platform for developing computer vision and sensor fusion applications that improve engagement, privacy, and collaboration for users.

For example, a client device can use image data from its camera to determine whether someone is standing too close behind the user and blur the screen for privacy, or extend battery life by dimming the display when it “sees” that the user’s attention is focused elsewhere.

“AI applications based on vision, sound, and other sensors will revolutionize the Client Computing experience,” says Matt Dobrodziej, Vice President of Segment Marketing and Business Development at Lattice. “Our sensAI solution stack supports a roadmap of Edge AI applications that make Client devices contextually aware of how, when, and where they’re being used, and our Nexus FPGAs deliver that functionality with class-leading low power consumption.”

According to the company, compute devices running an AI application developed with the sensAI solution stack on a Lattice FPGA deliver 28 percent longer battery life than devices that run AI applications on their CPUs. The sensAI solution stack also supports in-field software updates to keep pace with evolving AI algorithms and gives OEMs the flexibility to choose from different sensor and SoC technologies for their devices.

Enhancements and new features of the latest version (v4.1) of sensAI include:

  • Client Compute AI experience reference designs –
    • User presence detection to automatically power on/off Client devices as a user approaches or departs.
    • Attention tracking to lower a device’s screen brightness to conserve battery life when the user isn’t looking at the screen.
    • Face framing to improve the video experience in video conferencing applications.
    • Onlooker detection to recognize when someone is standing behind the device and blur the screen to maintain data privacy.
  • Expanded application support – the performance and accuracy gains made possible with v4.1 expand the sensAI solution stack’s target applications to include the highly accurate object and defect detection applications used in automated industrial systems. The stack has a new hardware platform for voice- and vision-based ML application development featuring an onboard image sensor, two I2S microphones, and expansion connectors for adding additional sensors.
  • Easy-to-use tools – the sensAI solution stack has an updated neural network compiler and supports Lattice sensAI Studio, a GUI-based tool with a library of AI models that can be configured and trained for popular use cases. sensAI Studio now supports AutoML features to enable creation of ML models based on application and dataset targets. Several of the models based on the MobileNet ML inferencing training platform are optimized for the latest Nexus FPGA family, Lattice CertusPro-NX. The stack is compatible with other widely used ML platforms, including the latest versions of Caffe, Keras, TensorFlow, and TensorFlow Lite.
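Features such as user presence detection typically need more than a raw per-frame classifier result: a single missed detection should not power the display off, and a single false positive should not wake it. A minimal sketch of the kind of hysteresis logic involved (the class name, frame thresholds, and API here are illustrative assumptions, not part of the Lattice sensAI stack):

```python
class PresenceFilter:
    """Turn noisy per-frame presence detections into a stable display on/off state."""

    def __init__(self, frames_to_wake=3, frames_to_sleep=30):
        # Require a few consecutive detections before waking the device,
        # and a longer streak of absences before powering the display off.
        self.frames_to_wake = frames_to_wake
        self.frames_to_sleep = frames_to_sleep
        self.streak = 0          # consecutive frames agreeing with a pending change
        self.display_on = False

    def update(self, user_detected: bool) -> bool:
        """Feed one frame's detection result; return the resulting display state."""
        if user_detected == self.display_on:
            self.streak = 0      # no state change pending; reset the counter
            return self.display_on
        self.streak += 1
        threshold = self.frames_to_wake if user_detected else self.frames_to_sleep
        if self.streak >= threshold:
            self.display_on = user_detected
            self.streak = 0
        return self.display_on
```

With these example thresholds, the display turns on after three consecutive frames with the user present and off only after thirty consecutive frames without them, so brief detection flicker never toggles the screen.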

The latest version of the sensAI solution stack (v4.1) is available now and supports the company’s roadmap of AI-based applications.

Lattice Semiconductor
