The company is adopting a “processing-in-memory” approach to neural network implementation. The funding round was led by Draper Fisher Jurvetson and included Lux Capital, Data Collective and AME Cloud Ventures. Steve Jurvetson of DFJ and Shahin Farshchi of Lux Capital have joined the Mythic board of directors.
Prior to the Series A, Mythic had raised about $2.5 million in grant support, according to reports. Mythic has followed the path of another startup – Ambiq – in moving to Austin to commercialize research out of the University of Michigan.
Co-founders Mike Henry (CEO) and Dave Fick (CTO) developed a deep learning inference model at the Michigan Integrated Circuits Lab based on hybrid digital and analog computation. The system transfers deep learning computations to the memory structures that store the algorithm parameters, such as neural network connection weights.
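As a rough illustration of why that matters (and not a description of Mythic's actual circuitry), the bulk of neural network inference is multiply-accumulate arithmetic over stored weights. A conventional processor must fetch every weight from memory before using it, whereas a processing-in-memory design performs the arithmetic where the weights already reside. The Python sketch below shows the operation in question; the layer sizes are arbitrary.

import numpy as np

# A fully connected layer: the parameters (connection weights) live in memory.
weights = np.random.randn(1024, 784)   # weights held in the memory array
activations = np.random.randn(784)     # input vector arriving at the layer

# The dominant cost of inference is one multiply-accumulate per stored weight.
# A CPU or GPU moves each weight to its compute units to form this sum; a
# processing-in-memory device computes it inside the array holding the weights.
output = weights @ activations         # result has shape (1024,)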
“We saw early on the opportunities afforded AI from processor-in-memory technology and new methods of computing outside of the binary 1 and 0 world. This would become a foundation for the Mythic platform that revolutionizes local AI design and performance,” said Henry in a statement. It is also notable that Mythic is looking to recruit non-volatile memory engineers.
Another company developing a processor intended to sit close to memory is Upmem SA (Grenoble, France), which plans to implement its design in DRAM technology. Upmem is expected to produce a more conventional form of processor, however, with the differentiation being its ability to sit monolithically close to memory.
The Series A will enable Mythic to begin the implementation phase for a neural network chip. The company is seeking out early adopters to field test the technology and volume shipments are projected for mid-2018.
Compared against what?
The company claims its technology will provide GPU-class compute capabilities for deep neural networks with 50x higher battery life and far more data processing capability than competitors.
It is not clear what competition Mythic is comparing itself against, probably neural network software running inefficiently on conventional processors, but the company's pitch is that machine learning is currently done by moving data to the cloud, whereas it needs to be done at the leaf-node device.
Typical functions would include speech and gesture recognition as well as computer vision and collision avoidance. Typical applications would include consumer electronics, drones and robotics, but also autonomous vehicles.