
ZeroPoint compression increases AI memory by 50 percent
ZeroPoint Technologies AB (Gothenburg, Sweden) has announced a hardware-assisted compression technology that it claims can be used to increase the effective memory of foundational AI models, such as large language models (LLMs), by 50 percent.
ZeroPoint, a compression specialist, was founded in 2015 by Professor Per Stenström and Angelos Arelakis as a spin-out from Chalmers University of Technology.
The product, called AI-MX, enables compression and decompression of deployed foundational models, including LLMs, and is expected to be delivered to customers and partners in 2H25.
The use of AI-MX will allow enterprise and hyperscale datacenters to improve a range of performance metrics – including effective addressable memory, memory bandwidth and tokens served per second – ZeroPoint said.
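As a rough illustration of how such a compression ratio maps onto those metrics, the back-of-envelope Python sketch below scales a hypothetical accelerator's memory capacity and bandwidth by a 1.5x factor. The device figures are assumptions chosen for illustration, not ZeroPoint data.

```python
# Illustrative arithmetic only: how a 1.5x compression ratio translates into
# effective capacity and bandwidth. Device figures below are assumptions.
COMPRESSION_RATIO = 1.5          # 50 percent more effective memory, per the claim

physical_capacity_gb = 80.0      # assumed HBM capacity of a single accelerator
physical_bandwidth_gbs = 3000.0  # assumed raw memory bandwidth, in GB/s

effective_capacity_gb = physical_capacity_gb * COMPRESSION_RATIO
effective_bandwidth_gbs = physical_bandwidth_gbs * COMPRESSION_RATIO

print(f"Effective capacity:  {effective_capacity_gb:.0f} GB "
      f"(vs {physical_capacity_gb:.0f} GB physical)")
print(f"Effective bandwidth: {effective_bandwidth_gbs:.0f} GB/s "
      f"(vs {physical_bandwidth_gbs:.0f} GB/s physical)")
```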
AI-MX works with a wide variety of memory types, including HBM, LPDDR, GDDR and DDR – ensuring that the memory optimization benefits apply to most AI acceleration use cases.
“With today’s announcement, we introduce a first-of-its-kind memory optimization solution that has the potential to save companies billions of dollars per year related to building and operating large-scale datacenters for AI applications,” said Klas Moreau, CEO of ZeroPoint Technologies, in a statement.
ZeroPoint’s proprietary hardware-accelerated compression, compaction, and memory management technologies operate at low nanosecond latencies, enabling them to work more than 1,000 times faster than traditional compression algorithms.
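For a sense of scale, the short sketch below compares an assumed software compression latency in the microsecond range with an assumed low-nanosecond hardware path; both figures are illustrative assumptions, not measured ZeroPoint numbers.

```python
# Illustrative comparison only: assumed latencies, not measured ZeroPoint figures.
software_latency_ns = 10_000.0   # assumed ~10 microseconds for a software compressor
hardware_latency_ns = 10.0       # assumed low-nanosecond hardware compression path

speedup = software_latency_ns / hardware_latency_ns
print(f"Illustrative speedup: {speedup:.0f}x")  # ~1,000x, the order of magnitude claimed
```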
ZeroPoint Technologies said it aims to exceed the 1.5x increase in capacity and performance in subsequent generations of the AI-MX product.
Related links and articles:
Technical specifications of AI-MX
News articles:
ZeroPoint’s DenseMem compression saves power in the datacenter
Lightelligence adds ZeroPoint compression to CXL interconnect
Memory compression IP startup raises €2.5 million
