Using light to calculate and communicate within the chip reduces heat, cutting per-chip energy consumption by orders of magnitude and dramatically improving processor speed, the company says. The optical AI processor is designed to meet the growing computational demands of next-generation AI algorithms.
"The Department of Energy estimates that by 2030, computing and communications technology will consume more than 8 percent of the world's power," says Nicholas Harris, PhD, founder and CEO at Lightmatter. "Transistors, the workhorse of traditional processors, aren't improving; they're simply too hot. Building larger and larger datacenters is a dead end path along the road of computational progress."
"We need a new computing paradigm," says Harris. "Lightmatter's optical processors are dramatically faster and more energy efficient than traditional processors. We're simultaneously enabling the growth of computing and reducing its impact on our planet."
The 3D-stacked chip package contains over a billion FinFET transistors, tens of thousands of photonic arithmetic units, and hundreds of record-setting data converters, according to the company. The photonic processor supports standard machine learning frameworks, including PyTorch and TensorFlow, enabling it to run state-of-the-art AI algorithms.
The company describes this new architecture as a major advance in the development of photonic processors. It offers the processor's performance as evidence that its approach to processor design delivers scalable speed and energy-efficiency advantages over the current electronic compute paradigm, and as the starting point for a roadmap of chips with dramatic performance improvements.