
BrainChip joins Intel Foundry Services to drive Neuromorphic AI

Business news | By Jean-Pierre Joosting



BrainChip Holdings Ltd, a commercial producer of ultra-low power neuromorphic AI IP, has announced that it has become a member of the Intel Foundry Services (IFS) ecosystem alliance to help advance innovation on Intel’s foundry manufacturing platform.

BrainChip is the latest industry-leading IP partner to join the IFS Accelerator IP Alliance. Partners in this alliance collaborate with IFS to give designers access to high-quality IP that supports their design needs and project schedules while optimizing for performance, power and area. Building on Intel’s advanced technology, the IFS Accelerator IP portfolio includes all the essential IP blocks needed for modern systems-on-chip (SoCs), such as standard cell libraries, embedded memories, general-purpose I/Os, analog IP and interface IP.

A new generation of devices demanding independent learning and inference capabilities, faster response times and low power consumption has created opportunities for new products with smarter sensors, devices and systems. Integrating AI into the SoC delivers efficient compute while meeting the unique learning and performance requirements of edge AI. BrainChip’s Akida™ enables low-latency, ultra-low power AI inference and on-chip learning.

The Akida neuromorphic processor IP mimics the human brain, analyzing only essential sensor inputs at the point of acquisition and processing data with unparalleled efficiency, precision and economy of energy. Keeping AI/ML local to the chip and independent of the cloud dramatically reduces latency while improving privacy and data security.

The fully customizable, event-based AI neural processor enables inference and learning at the edge. Its scalable architecture and small footprint boost efficiency by orders of magnitude, supporting up to 1024 nodes connected over a mesh network.

Every node consists of four Neural Processing Units (NPUs), each with scalable and configurable SRAM. Within each node, the NPUs can be configured as either convolutional or fully connected. The Akida neural processor is event based, leveraging sparsity in data, activations and weights to reduce the number of operations by at least 2x.
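To make the operation-count claim concrete, the short Python sketch below compares a conventional dense fully connected layer, which pays one multiply-accumulate (MAC) per input-output pair, with an event-driven layer that skips any pair where the activation or the weight is zero. The layer size and the sparsity levels are illustrative assumptions for this sketch, not Akida specifications.

```python
import numpy as np

def dense_mac_count(activations, weights):
    """A conventional layer performs one MAC per input-output pair, regardless of value."""
    return activations.size * weights.shape[1]

def event_based_mac_count(activations, weights):
    """An event-based layer only spends a MAC on non-zero activation / non-zero weight pairs."""
    nonzero_inputs = np.flatnonzero(activations)
    return int(sum(np.count_nonzero(weights[i]) for i in nonzero_inputs))

rng = np.random.default_rng(0)

# Illustrative fully connected layer: 1024 inputs -> 256 outputs,
# with roughly 70% zero activations and 50% zero weights (assumed sparsity levels).
activations = rng.random(1024) * (rng.random(1024) > 0.7)
weights = rng.random((1024, 256)) * (rng.random((1024, 256)) > 0.5)

dense = dense_mac_count(activations, weights)
sparse = event_based_mac_count(activations, weights)
print(f"dense MACs: {dense}")
print(f"event MACs: {sparse}")
print(f"reduction:  {dense / sparse:.1f}x")
```

With these assumed sparsity levels the reduction comes out well beyond 2x; the figure achieved in practice depends on the model and the input data.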

“The combination of BrainChip’s Akida IP and Intel’s leading technology helps ensure that customers looking to implement edge AI acceleration and learning have the tools and resources to accelerate their success,” said Anil Mankar, Chief Development Officer at BrainChip.

www.brainchip.com


