MemryX (Ann Arbor, Mich.) has said its MX3 Edge AI Accelerator is now generally available as a standalone chip and as a four-chip module.
MemryX is a 2019 startup developing “at-memory” AI computing for edge devices; it announced it was sampling the MX3 in August 2023.
The MX3 chip consumes between 0.5 and 2.0 W, depending on the demands of the AI model and system settings, said MemryX. The M.2 card can continuously run one or more AI models on tens of incoming camera streams, which the company pitches as a game changer for edge applications such as video management.
The MX3 uses floating-point activations, and the compilation process maintains AI models as trained. This means customers using the MX3 do not need to retrain models, use pilot images during feature map quantization to increase accuracy, or approximate unsupported operators.
MemryX does not use or require a model zoo, in which customer models are modified to fit the target hardware. A single MX3 can be used on its own, or combined with additional MX3 chips without software changes, using the same host interface and without adding any hardware such as PCIe switches.
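For readers unfamiliar with this style of deployment flow, the sketch below illustrates the pattern the company describes: a trained model is compiled once, unmodified, and the same host-side code runs whether one chip or a four-chip module is installed. The module, class, and function names here are purely illustrative assumptions, not MemryX's actual SDK.

```python
# Illustrative sketch only -- not MemryX's SDK. It mocks the deployment
# pattern described above: compile a trained model once, then run the same
# host code regardless of how many accelerator chips are installed.

from dataclasses import dataclass
from typing import List


@dataclass
class CompiledModel:
    """A device-agnostic artifact produced from the trained model as-is."""
    name: str


def compile_model(trained_model_path: str) -> CompiledModel:
    # No retraining, calibration images, or operator approximation in this
    # hypothetical flow -- the trained model is taken as the source of truth.
    return CompiledModel(name=trained_model_path)


class Accelerator:
    """Hypothetical host interface that hides the chip count from the app."""

    def __init__(self, model: CompiledModel, num_chips: int = 1):
        self.model = model
        self.chips: List[int] = list(range(num_chips))

    def infer(self, frame_id: int) -> str:
        # Work is spread across whatever chips are present; the calling code
        # does not change between a single chip and a four-chip module.
        chip = self.chips[frame_id % len(self.chips)]
        return f"{self.model.name}: frame {frame_id} on chip {chip}"


if __name__ == "__main__":
    model = compile_model("detector.onnx")     # compiled as trained
    single = Accelerator(model, num_chips=1)   # standalone chip
    module = Accelerator(model, num_chips=4)   # four-chip module
    for f in range(4):
        print(single.infer(f), "|", module.infer(f))
```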
“After many months of rigorous testing, we are very excited to announce we have reached the production milestone of our Edge AI Accelerator,” said Keith Kressin, CEO of MemryX, in a statement.
Early customers have used the MX3 in applications that include retail, security, agriculture, automotive and robotics.
Related links and articles:
News articles:
At-memory AI startup MemryX heads to India
Edge AI accelerator chip offers ‘one-click’ compiler