Low power in-memory AI for vision applications

Technology News |
By Nick Flaherty

Analog Devices is demonstrating a vision AI application using its MAX78000 microcontroller, which pairs a custom convolutional neural network (CNN) accelerator with a compute-in-memory architecture.

The visual servo application runs vision AI inferences using just microjoules of energy and millimetres of board space, with a low power camera at a resolution of 169 x 120 pixels running at 4 frames/s. The 50 MHz MAX78000 includes a CNN accelerator with 64 parallel processing cores developed in-house, and the 200 µJ energy profile allows the chip to run from a coin cell at under 1 mA.
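The sub-1 mA figure can be sanity-checked from the numbers quoted. A rough sketch, assuming a 3 V coin cell and one inference per frame (both assumptions, not stated in the article):

```python
# Back-of-the-envelope power budget for the figures quoted above.
# Assumptions: a 3 V coin cell (typical CR2032 nominal voltage) and
# that the 200 uJ figure covers one complete inference.

ENERGY_PER_INFERENCE_J = 200e-6   # 200 microjoules per inference
FRAME_RATE_HZ = 4                 # camera runs at 4 frames/s
COIN_CELL_VOLTAGE_V = 3.0         # assumed supply voltage

avg_power_w = ENERGY_PER_INFERENCE_J * FRAME_RATE_HZ    # J/s = W
avg_current_ma = avg_power_w / COIN_CELL_VOLTAGE_V * 1e3

print(f"Average power:   {avg_power_w * 1e3:.2f} mW")   # 0.80 mW
print(f"Average current: {avg_current_ma:.2f} mA")      # well under 1 mA
```

At 4 frames/s the inference load averages 0.8 mW, or roughly 0.27 mA from a 3 V cell, consistent with the sub-1 mA claim even with camera and housekeeping overhead on top.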

Robotic applications demand a higher level of intelligence and performance that can only be achieved with AI. However, inferences in the cloud are expensive, prone to high latency, and consume large amounts of energy.


Shifting processing to the edge – closer to where data is collected – allows robots to sense and interpret data quickly and reliably, while also becoming more localized and self-contained. Embedding AI into the sensors themselves eliminates the need to send data to the cloud, reducing latency and costs while increasing privacy. This enables an entirely new class of battery-powered AI and embedded robotics applications.

The camera module is attached to a Robotis OpenMANIPULATOR-X robotic arm in an eye-in-hand configuration that allows the arm to detect and track a visually tagged object, even readjusting itself on the fly in a dynamic environment. Local image processing provides embedded intelligence and responsiveness, with micro-ROS support for custom applications.
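The eye-in-hand tracking loop can be illustrated with a simple proportional controller: the CNN reports the tag's pixel position, and the pixel error from the image centre is turned into arm corrections. This is an illustrative sketch, not Analog Devices' demo code; the `servo_step` name and gain are hypothetical:

```python
# Illustrative eye-in-hand visual servo step (not the actual demo code).
# The CNN reports the tracked tag's pixel position; a proportional
# controller converts the pixel error into pan/tilt corrections that
# re-centre the tag in the frame. Gain kp is a made-up value.

FRAME_W, FRAME_H = 169, 120   # camera resolution quoted in the article

def servo_step(tag_x, tag_y, kp=0.005):
    """Return (pan, tilt) velocity commands that drive the tag to centre."""
    err_x = tag_x - FRAME_W / 2   # positive: tag is right of centre
    err_y = tag_y - FRAME_H / 2   # positive: tag is below centre
    return -kp * err_x, -kp * err_y

pan, tilt = servo_step(120, 60)   # tag to the right, vertically centred
print(pan, tilt)                  # negative pan (turn left), zero tilt
```

In the real system this correction would be published as a micro-ROS message to the arm's controller each frame, closing the loop at the camera's 4 frames/s.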

The AI model is trained using PyTorch, and a custom tool converts the trained network into C code that is flashed to the device.
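A converter of this kind typically quantizes the trained floating-point weights to 8-bit integers and emits them as C arrays for the accelerator's memory. A minimal sketch of that step, as an illustration only (the function names and details here are assumptions, not the actual Analog Devices tool):

```python
# Minimal sketch of the int8 quantization a PyTorch-to-C converter
# performs before emitting weights for an embedded accelerator.
# Illustrative only; not the actual Analog Devices toolchain.

def quantize_int8(weights):
    """Symmetric per-tensor quantization of float weights to int8."""
    max_abs = max(abs(w) for w in weights)
    scale = 127.0 / max_abs if max_abs else 1.0
    q = [max(-128, min(127, round(w * scale))) for w in weights]
    return q, scale

def emit_c_array(name, q):
    """Render quantized weights as a C initializer ready for flashing."""
    body = ", ".join(str(v) for v in q)
    return f"static const int8_t {name}[{len(q)}] = {{{body}}};"

q, scale = quantize_int8([0.5, -1.0, 0.25])
print(q)                              # [64, -127, 32]
print(emit_c_array("conv1_w", q))
```

The generated C source is then compiled into the firmware image, so the weights live in the accelerator's memory and no network is loaded at runtime.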

