Winbond stacks memory die for generative AI at the edge

Technology News | By Nick Flaherty

Winbond Electronics in Taiwan has launched a technology to stack memory on a processor for generative AI in edge devices.

Winbond's customized ultra-bandwidth elements (CUBE) technology allows memory to be optimized for generative AI in hybrid edge/cloud applications.

CUBE can be used with front-end 3D structures such as chip on wafer (CoW) and wafer on wafer (WoW), as well as back-end 2.5D/3D chip on Si-interposer on substrate and fan-out solutions. The technology supports memory densities from 256Mbit to 8Gbit on a single die, and dies can also be 3D stacked to increase bandwidth while reducing data transfer power consumption.

The technology is power efficient, consuming less than 1pJ/bit, which allows extended operation and optimized energy usage at bandwidths from 32GB/s to 256GB/s per die.

Winbond says it is taking a major step forward with CUBE, enabling seamless deployment across various platforms and interfaces. The technology is suited to advanced applications such as wearable and edge server devices, surveillance equipment, ADAS and co-bots.

Stacking the SoC (a top die without TSVs) on top of the CUBE die makes it possible to minimize the SoC die size by eliminating the TSV penalty area. This not only improves cost, but also contributes to overall efficiency, including the small form factor of edge AI devices.

“The CUBE architecture enables a paradigm shift in AI deployment,” says Winbond. “We believe that the integration of cloud AI and powerful edge AI will define the next phase of AI development. With CUBE, we are unlocking new possibilities and paving the way for improved memory performance and cost optimization on powerful edge AI devices.”

CUBE offers a range of memory capacities from 256Mbit to 8Gbit per die, built on a 20nm process today and moving to 16nm in 2025, allowing CUBE to fit into smaller form factors. Using through-silicon vias (TSVs) further enhances performance, improving signal and power integrity while shrinking the IO area through a smaller pad pitch. It also eases heat dissipation, especially with the SoC as the top die and CUBE as the bottom die.

The CUBE IO boasts a data rate of up to 2Gbps across a total of 1K IO lines. When paired with legacy foundry processes such as 28nm/22nm SoCs, CUBE provides high bandwidth of 32GB/s to 256GB/s, equivalent to the bandwidth of HBM2 memory.
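The quoted figures can be cross-checked with simple arithmetic. A minimal sketch, assuming "1K IO" means 1024 IO lines and decimal units (GB = 10^9 bytes) — both assumptions, not Winbond specifications:

```python
# Sanity-check the quoted peak bandwidth and transfer power.
IO_LINES = 1024          # assumed interpretation of "1K IO"
RATE_PER_IO_GBPS = 2.0   # per-pin data rate, Gbit/s
ENERGY_PJ_PER_BIT = 1.0  # quoted upper bound, pJ/bit

total_gbps = IO_LINES * RATE_PER_IO_GBPS       # aggregate bit rate, Gbit/s
bandwidth_gb_s = total_gbps / 8                # bytes per second, GB/s

# Transfer power at full bandwidth: bits/s * energy per bit.
power_w = total_gbps * 1e9 * ENERGY_PJ_PER_BIT * 1e-12

print(f"peak bandwidth: {bandwidth_gb_s:.0f} GB/s")  # 256 GB/s
print(f"transfer power: {power_w:.2f} W")            # ~2 W at peak rate
```

At the full 2Gbps rate this lands on 256GB/s, the top of the quoted range, with roughly 2W of data-transfer power at the 1pJ/bit bound.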

“CUBE can unleash the full potential of hybrid edge/cloud AI to elevate system capabilities, response time, and energy efficiency,” Winbond added. “Winbond’s commitment to innovation and collaboration will enable developers and enterprises to drive advancement across various industries.”

Winbond is actively engaging with partner companies to establish the 3DCaaS platform, which will leverage CUBE’s capabilities. By integrating CUBE with existing technologies, Winbond aims to offer cutting-edge solutions that empower businesses to thrive in the era of AI-driven transformation.
