
Baya’s NeuraScale fabric sees 100x AI chiplet boost
US startup Baya Systems has developed a chip interconnect that it says can accelerate AI chiplet performance by a factor of 100.
The NeuraScale switch fabric technology is designed to overcome critical scaling and data movement challenges in AI infrastructure with a 100x increase in node density for the UALink standard.
This is a key factor for AI workloads that need higher performance, throughput and efficiency, areas where traditional crossbar switching architectures are struggling to keep pace.
The NeuraScale switch fabric is a fundamentally new approach that supports 256 ports per chiplet with 1 Tb/s throughput, enabling next-generation AI infrastructure. The fabric switches at over 2 GHz in a 4 nm process technology, with a maximum latency below 20 ns and a 2048-bit data width for efficient data transfer.
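As a rough sanity check of those headline figures (the article does not say whether the 1 Tb/s is per port; the sketch below assumes it is), the aggregate switching bandwidth and the raw datapath rate can be worked out as follows:

```python
# Back-of-the-envelope check of the quoted NeuraScale figures.
# Assumption (not stated in the article): 1 Tb/s is per-port throughput.

PORTS = 256              # ports per chiplet
PORT_TBPS = 1.0          # assumed per-port throughput, Tb/s
DATA_WIDTH_BITS = 2048   # datapath width, bits
CLOCK_GHZ = 2.0          # switching clock in a 4 nm process, GHz

# Aggregate bandwidth if every port runs at full rate
aggregate_tbps = PORTS * PORT_TBPS

# Raw rate of a single 2048-bit datapath clocked at 2 GHz
datapath_tbps = DATA_WIDTH_BITS * CLOCK_GHZ * 1e9 / 1e12

print(f"Aggregate bandwidth: {aggregate_tbps:.0f} Tb/s")      # 256 Tb/s
print(f"Raw datapath rate:  {datapath_tbps:.3f} Tb/s")        # 4.096 Tb/s
```

Under that assumption, a 2048-bit datapath at 2 GHz moves roughly 4 Tb/s, so several ports can share one datapath while each still sustains its 1 Tb/s.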
The fully modular architecture simplifies physical implementation for each SoC while enabling seamless scaling across multiple chiplets. It supports the current Arm AMBA interconnect protocols and is designed for future UALink, UCIe and Ultra Ethernet compliance.
“AI workload demands have outpaced what traditional crossbar interconnects can provide for the next generation of scale-up,” said Dr. Sailesh Kumar, Baya Systems’ CEO. “NeuraScale empowers a radical growth in node density, with highly energy-efficient and area-efficient switching solutions for next-gen AI – the key challenges to scale. With our platform, we enable unprecedented performance and scale, while substantially reducing the implementation challenges and time to market, which are serious challenges for traditional switching.”
He sees the fabric used with the company's WeaverPro software-driven interconnect development platform, which streamlines design, analysis and optimization to speed up development cycles. It simplifies large-scale interconnect implementation, overcoming key bottlenecks in traditional crossbar architectures, while offering flexible, multi-protocol support to ensure interoperability and adaptability for AI datacentres.
“Nvidia has demonstrated the benefits of high-performance networking for scaling AI with its NVLink proprietary interconnect,” said Karl Freund, founder of Cambrian-AI Research. “Now the industry is developing the UALink standard, which could democratize scaling, but it needs rapid innovation in SoC and chiplet-ready switching technologies creating opportunities for companies like Baya Systems and their fabric solutions.”
“The Ultra Accelerator Link Consortium (UALink) was formed with a vision of creating an open ecosystem that boosts innovation and collaboration in scale-up, and our first specification will support up to 1K accelerators in an AI pod,” said Peter Onufryk, President of the UALink Consortium. “We welcome Baya Systems, which joins a rapidly growing set of game changers meeting the challenges of AI scaling.”
Baya Systems has active collaborations with lead partners that are using NeuraScale in chip designs, and the technology will be available to the broader ecosystem in Q2 2025. Tenstorrent is already using Baya technology for a RISC-V AI chiplet design.
