AI inferencing at 100 times lower MAC power and 20-fold lower cost
Sagence AI™ has emerged from stealth to unveil a groundbreaking analog in-memory compute architecture that directly addresses the untenable power/performance/price and environmental sustainability conundrum facing AI inferencing.
Based on industry-first architectural innovations using analog technology, Sagence AI makes possible improvements of multiple orders of magnitude in energy efficiency and cost for AI inferencing, while sustaining performance equivalent to high-performance GPU/CPU-based systems.
Compared with the leading volume GPU processing the Llama2-70B large language model, with throughput normalized to 666K tokens/s, Sagence technology performs with 10 times lower power, 20 times lower price, and 20 times smaller rack space. Using a modular chiplet architecture for maximum integration, the technology enables a highly efficient inference machine that scales from data center generative AI to edge computer vision applications across multiple industries. The technology balances high performance and low power at affordable cost, addressing the growing ROI problem for generative AI applications at scale as AI compute in the data center shifts from training models to deploying them for inference.
“A fundamental advancement in AI inference hardware is vital to the future of AI. Use of large language models (LLMs) and Generative AI drives demand for rapid and massive change at the nucleus of computing, requiring an unprecedented combination of highest performance at lowest power and economics that match costs to the value created,” said Vishal Sarin, CEO & Founder, Sagence AI. “The legacy computing devices today that are capable of extreme high performance AI inferencing cost too much to be economically viable and consume too much energy to be environmentally sustainable. Our mission is to break those performance and economic limitations in an environmentally responsible way.”
“The demands of the new generation of AI models have resulted in accelerators with massive on-package memory and consequently extremely high power consumption. Between 2018 and today, the most powerful GPUs have gone from 300 W to 1200 W, while top-tier server CPUs have caught up to the power consumption levels of NVIDIA’s A100 GPU from 2020,” said Alexander Harrowell, Principal Analyst, Advanced Computing, Omdia. “This has knock-on effects for data center cooling, electrical distribution, AI applications’ unit economics, and much else. One way out of the bind is to rediscover analog computing, which offers much lower power consumption, very low latency, and permits working with mature process nodes.”
Sagence AI leads the industry on the frontier of in-memory compute innovation. It is the first to perform deep subthreshold compute inside multi-level memory cells, an unprecedented combination that opens the door to the orders-of-magnitude improvements necessary to deliver inference at scale. As digital technology reaches the limits of its ability to scale power and cost, Sagence has innovated a new analog path forward, leveraging the inherent energy-efficiency and cost advantages of analog to make possible mass adoption of AI that is both economically viable and environmentally sustainable.
In-memory computing aligns closely with the essential elements of efficiency in AI inference applications. Merging storage and compute inside memory cells eliminates single-purpose memory storage and the complex, scheduled multiply-accumulate circuits that run the vector-matrix multiplication integral to AI computing. The resulting chips and systems are much simpler, lower in cost and power, and offer vastly more compute capability.
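To make the idea concrete, the following is a minimal conceptual sketch (not Sagence's actual design) of how an analog in-memory crossbar computes a vector-matrix product physically: each weight is stored as a cell conductance, input voltages drive the rows, Ohm's law gives each cell's current, and Kirchhoff's current law sums the currents on each column wire, so the array outputs the product with no scheduled multiply-accumulate units. All array sizes and noise parameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4x3 crossbar: weights stored as cell conductances (siemens),
# inputs applied as row voltages (volts).
weights = rng.uniform(0.0, 1.0, size=(4, 3))
inputs = rng.uniform(0.0, 1.0, size=4)

# Ohm's law per cell (I = G * V) plus Kirchhoff's current law per column
# yields the vector-matrix product "for free" as summed column currents.
column_currents = inputs @ weights

# A real analog array would add noise and quantization error; model a
# small Gaussian perturbation on the read-out currents.
noisy_currents = column_currents + rng.normal(0.0, 1e-3, size=3)

print(column_currents)
```

The point of the sketch is that the dot product falls out of circuit physics in one step, which is why eliminating digital MAC scheduling translates directly into power and area savings.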
Sagence views the AI inferencing challenge not as a general-purpose computing problem, but as a mathematically intensive data processing problem. Managing the massive amount of arithmetic processing needed to “run” a neural network on CPU/GPU digital machines requires extremely complicated hardware reuse and scheduling. The natural hardware solution is not a general-purpose computing machine, but rather an architecture that more closely mirrors how biological neural networks operate.
The statically scheduled deep subthreshold in-memory compute architecture employed by Sagence chips is much simpler and eliminates the variabilities and complexities of the dynamic scheduling required of CPUs and GPUs. Dynamic scheduling places extreme demands on the SDK to generate runtime code and contributes to cost and power inefficiencies. The Sagence AI design flow imports a trained neural network through standards-based interfaces such as PyTorch, ONNX, and TensorFlow, and automatically converts it into the Sagence format. The Sagence system receives the neural network only after the GPU software has finished training it, eliminating any further need for that software.
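The contrast between static and dynamic scheduling can be sketched in a few lines. The following is a hypothetical illustration (not the Sagence toolchain; all names, layer shapes, and the tile capacity are invented for the example): a statically scheduled compiler assigns every layer of an imported network to fixed in-memory compute tiles once, at compile time, so no runtime scheduler or generated runtime code is needed.

```python
# Hypothetical imported network: layer names and weight-matrix shapes
# are illustrative, loosely modeled on transformer layer dimensions.
layers = [
    {"name": "embed", "shape": (4096, 4096)},
    {"name": "attn_qkv", "shape": (4096, 12288)},
    {"name": "mlp_up", "shape": (4096, 11008)},
]

# Assumed capacity of one in-memory compute tile, in weight cells.
TILE_CAPACITY = 4096 * 4096

def static_schedule(layers):
    """Assign each layer to fixed tiles at compile time (no runtime dispatch)."""
    schedule = {}
    next_tile = 0
    for layer in layers:
        rows, cols = layer["shape"]
        tiles_needed = -(-(rows * cols) // TILE_CAPACITY)  # ceiling division
        schedule[layer["name"]] = list(range(next_tile, next_tile + tiles_needed))
        next_tile += tiles_needed
    return schedule

print(static_schedule(layers))
```

Because the layer-to-tile mapping is resolved entirely ahead of time, inference becomes a fixed dataflow through the array, which is the property the press release credits for removing GPU-style runtime scheduling overhead.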