Space and Time Vision with NimbleAI’s 3D Chip Design

News
By Wisse Hettinga

The Horizon Europe project aims to push the boundaries of neuromorphic vision

The three-year, €10 million research project will improve the energy efficiency and performance of next-generation neuromorphic chips that support event-based vision. Its goal is to create an integral neuromorphic sensing-processing 3D stacked silicon architecture that efficiently runs accurate and diverse computer vision algorithms on resource- and area-constrained chips destined for endpoint devices.

As seen in recent technological trends, biological systems are a “golden benchmark” for electronic systems as well as a source of inspiration. NimbleAI thus leverages the power of nature’s computing: the project’s global system architecture is based on and motivated by the biological eye-brain system. These organic vision capabilities are honed by natural selection and apply the fundamental energy-saving principle of capturing, processing, and storing data only when necessary. Hence, eyes continuously sense and encode the changing surrounding environment in a way that is manageable for the brain.

Following bio-inspired principles, NimbleAI will model the conscious and unconscious rationale behind deciding which visual stimuli to process and which to discard without processing. A frugal always-on sensing stage will build a basic understanding of the visual scene and drive a multi-tiered collection of highly specialized event-driven processing kernels and neural networks, which perform visual inference on selected stimuli using the bare minimum of energy.
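The gating idea can be illustrated in miniature. The sketch below is not NimbleAI's pipeline; it is a minimal, hypothetical example of an always-on stage that groups dynamic-vision-sensor (DVS) style events into coarse regions and forwards only the sufficiently active regions for further processing. The function name, grid size, and threshold are all illustrative assumptions.

```python
from collections import defaultdict

def gate_events(events, grid=8, width=128, height=128, min_events=20):
    """Group DVS-style events (x, y, timestamp, polarity) into coarse
    grid cells and return only the cells active enough to justify
    running a more expensive inference kernel on them."""
    cells = defaultdict(list)
    for x, y, t, p in events:
        cx, cy = x * grid // width, y * grid // height
        cells[(cx, cy)].append((x, y, t, p))
    # Forward only regions whose event count crosses the threshold;
    # everything else is discarded without further processing.
    return {cell: evs for cell, evs in cells.items() if len(evs) >= min_events}
```

In this toy version, a quiet scene produces few events and therefore almost no downstream work, which is the energy-saving principle the project borrows from biology.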

NimbleAI envisions a highly integrated 3D stacked silicon architecture where sensing, memory, communications, and processing are physically fused, and where accuracy, energy, resources, and time are dynamically traded off to enhance overall perception, that is, to maximize the amount of valuable visual information that can be captured and processed in time. As in biological visual systems, sensing and processing components will be adjusted at runtime to match each other and to operate jointly at the optimal temporal and data resolution scale across image regions.

NimbleAI expects to achieve a 100x improvement in energy efficiency and a 50x reduction in latency (relative to CPU/GPU implementations) by relying on the technologies, components, and techniques listed below:

  • Light-field-enabled dynamic vision sensing.
  • Event-based inference and processing.
  • Specialized processing with in-memory computing and programmable logic (eFPGA).
  • Embedded ReRAM-based storage.
  • 3D integration of circuit layers (TSV-based inter-layer data movement).
  • Mutual adaptation of sensing and processing to operate at the optimal DVFS (dynamic voltage and frequency scaling) point.
  • Dedicated software tools.
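The mutual-adaptation item can be sketched as a simple policy. The code below is a hypothetical illustration, not NimbleAI's controller: it picks a (spatial-resolution scale, clock frequency) operating point from a small table based on the current event rate, so quiet scenes run at low resolution and low clock while busy scenes get the full budget. All level and threshold values are invented for the example.

```python
def adapt_operating_point(event_rate,
                          levels=((0.25, 50), (0.5, 200), (1.0, 800)),
                          thresholds=(1_000, 10_000)):
    """Choose a (resolution scale, clock MHz) pair from hypothetical
    DVFS levels based on the scene's event rate in events/s."""
    if event_rate < thresholds[0]:
        return levels[0]   # quiet scene: lowest resolution and clock
    if event_rate < thresholds[1]:
        return levels[1]   # moderate activity: intermediate point
    return levels[2]       # busy scene: full resolution and clock
```

A real implementation would co-tune sensor and processor settings jointly and per image region, as the article describes, but the principle is the same: spend energy only in proportion to scene activity.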


