
CogniFiber raises $6m for in-fibre photonic processing
CogniFiber in Israel has raised $6m to commercialise its photonic technology for processing neural networks inside an optical fibre without the need for AI accelerators or memory.
Just as there is a move to process data in memory with in-memory computing (see links below), so in-fibre photonic processing enables computing on the fly within an optical fibre.
This is achieved by a technology called DeepLight that takes advantage of the crosstalk noise between fibres in an optical cable.
In-fibre photonic processing
The principle of in-fibre computing is analogous to that of a transistor, with a source, drain and gate. It allows controllable, programmable interactions between input data channels so that the output light carries exactly the desired mathematical function. In this manner both linear operations, such as Multiply-Accumulate (MAC), and nonlinear operations, such as sigmoid functions, can be implemented.
Because the control of these interactions is configurable, the result is a photonic device that can implement many interchangeable functions simply by re-programming the control parameters.
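By way of illustration, the sketch below models this idea in software: a layer whose re-programmable weight matrix performs the MAC step across the input channels, with an optional sigmoid nonlinearity. The class and parameter names are hypothetical and are not taken from CogniFiber's DeepLight design.

```python
# Minimal software analogy of a programmable in-fibre layer (illustrative only,
# not CogniFiber's DeepLight implementation): the 'control parameters' are a
# weight matrix, and re-programming them changes the function the layer computes.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class ProgrammablePhotonicLayer:
    """Toy model: input channels interact via a weight matrix (the MAC step),
    optionally followed by a nonlinear transfer function such as a sigmoid."""

    def __init__(self, weights, nonlinear=True):
        self.weights = np.asarray(weights)   # re-programmable control parameters
        self.nonlinear = nonlinear

    def forward(self, inputs):
        z = self.weights @ inputs            # linear Multiply-Accumulate (MAC)
        return sigmoid(z) if self.nonlinear else z

# Re-programming the same 'device' to a different function is just new weights.
layer = ProgrammablePhotonicLayer(np.random.randn(4, 8))
print(layer.forward(np.random.randn(8)))
```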
A paper in Nature in 2016 demonstrated that light routing between cores can be controlled by amplification patterns injected optically into the processor, allowing direct photonic computation from start to end without memory read/write operations.
The resulting system has a simple, direct and reliable I/O interface with a fibre connector for each channel, and uses low-loss, resilient and robust signal transmission with no inter-processor interference. This can be used for ‘on-the-fly’ inference of neural networks without the need for memory read/write of intermediate results, dramatically cutting power consumption, and can scale up to systems with over 100, and even 1,000, channels.
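As a rough software analogy of that ‘on-the-fly’ pipeline, the hypothetical sketch below chains stages so that each output feeds the next directly, with no explicit storage of intermediate results; the channel counts and weights are arbitrary example values, not CogniFiber figures.

```python
# Illustrative sketch of 'on-the-fly' inference: each stage's output flows
# straight into the next stage, with no read/write of intermediate results to
# a separate memory. Shapes and channel counts are arbitrary example values.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def stage(weights):
    """One photonic-style stage: a MAC followed by a nonlinearity."""
    def apply(x):
        return sigmoid(weights @ x)
    return apply

def run_pipeline(stages, x):
    for s in stages:        # the signal propagates stage to stage, unbuffered
        x = s(x)
    return x

# A pipeline scaling from 1,000 input channels down to 10 outputs.
stages = [stage(np.random.randn(256, 1000) / np.sqrt(1000)),
          stage(np.random.randn(64, 256) / np.sqrt(256)),
          stage(np.random.randn(10, 64) / np.sqrt(64))]
print(run_pipeline(stages, np.random.randn(1000)))
```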
Calculations from a proof of concept show that the DeepLight technology will reach over 100,000,000 TOPS (over 100 exa-operations per second) with an efficiency of 1,000 TOPS/W by 2026.
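Taking those quoted figures at face value, the arithmetic below shows how they relate; whether both numbers describe a single system is not stated, so the implied power figure is purely illustrative.

```python
# Relating the quoted 2026 targets (face-value arithmetic, illustrative only).
throughput_tops = 100_000_000       # "over 100,000,000 TOPS"
efficiency_tops_per_w = 1_000       # "1,000 TOPS/W"

ops_per_second = throughput_tops * 1e12          # 1 TOPS = 10**12 ops/s
exa_ops_per_second = ops_per_second / 1e18       # -> 100 exa-ops/s, as quoted
implied_power_w = throughput_tops / efficiency_tops_per_w  # -> 100,000 W

print(f"{exa_ops_per_second:.0f} exa-ops/s, implied power {implied_power_w:,.0f} W")
```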
“Our fibre optics-based system delivers a 100-fold boost in speed while reducing power consumption by 80%”, said Dr. Eyal Cohen, co-founder and CEO of CogniFiber. “This new ability to compute complex algorithms in a fraction of the time will leave Moore’s law in the rearview mirror.”
The series A round was led by Chartered Group, a private equity firm specializing in disruptive technologies with a presence in Europe and Asia.
Following the completion of a successful proof of concept, the funding will help complete the first full system prototype, expected in April 2022 and due to be shown at the CLEO conference in San Jose in May 2022. The first products, expected by the end of 2023, will implement a trainable photonic auto-encoder neural network system with an expected inference performance of more than 400 million tasks per second, 100 times faster than today’s silicon-based AI accelerators, with a power consumption of less than 500W.
This first line of products is aimed at applications in Industrial IoT and cybersecurity, which rely on massive amounts of real-time, on-premise, auto-encoder-based functions such as anomaly detection, transformation, de-noising and compression. Datacentre acceleration can also benefit from a high-performance, low-power auto-encoder.
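For context, the sketch below shows the kind of auto-encoder-style anomaly detection those applications rely on, flagging samples by reconstruction error. It uses a simple linear auto-encoder in software with synthetic data, purely as an illustration of the workload rather than of CogniFiber's photonic system.

```python
# Illustrative auto-encoder-style anomaly detection by reconstruction error.
# A linear auto-encoder (equivalent to PCA) stands in for the photonic network;
# the data and the threshold are synthetic and purely for demonstration.
import numpy as np

rng = np.random.default_rng(0)

# 'Normal' samples: 1,000 points lying near a 2-dimensional subspace of R^16.
normal = rng.normal(size=(1000, 2)) @ rng.normal(size=(2, 16))
# Anomalies: full-rank noise that the encoder cannot reconstruct well.
anomalies = rng.normal(size=(20, 16)) * 3.0

# Fit a linear encoder/decoder from the normal data (top principal components).
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
components = vt[:2]                       # 2 latent dimensions

def encode(x):
    return (x - mean) @ components.T

def decode(z):
    return z @ components + mean

def reconstruction_error(x):
    return np.linalg.norm(x - decode(encode(x)), axis=1)

# Flag anything that reconstructs worse than 99% of the normal data.
threshold = np.percentile(reconstruction_error(normal), 99)
flags = reconstruction_error(anomalies) > threshold
print(f"flagged {flags.sum()} of {len(anomalies)} injected anomalies")
```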
Before officially founding the company in 2018, CogniFiber’s co-founders spent years bringing the technology to life, including working within Intel’s Ingenuity Partner Program.
“Senior engineers at Intel were excited to drill down into every aspect of our technology, reviewing its viability to revolutionize how computers operate today,” said Professor Ze’ev Zalevsky, Co-founder and CTO of CogniFiber. “This technology will help deliver on the promise of photonic computing to safety, cybersecurity, autonomous driving, AI-developed medicines, and countless other applications.”
The company has filed eleven patent applications, of which three are accepted and four are pending.
Related in-memory computing articles
NeuroBlade raises $83m for compute in memory chip
sureCore develops in-memory computing for edge AI
Samsung uses MRAM for AI in-memory computing
Test chip created for Analog in Memory Computing
In-memory computing provider expands funding round
