FPGA chip maker Flex Logix is taking on industry giant Nvidia with a new machine learning chip for vision systems. It has adapted its FPGA interconnect fabric into an application-specific inference engine for machine learning at the edge, built on a 16nm process, with a second generation planned on 7nm.
The chip is optimised for video images and large machine learning models, rather than being a general-purpose AI chip, says Geoff Tate, CEO and co-founder of Flex Logix, founder of Rambus and former general manager of AMD’s processor business (above left with his co-founders).
“Our focus is on the edge, out in the real world, ultrasound systems, camera applications, autonomous vehicles, gene sequencing and automatic inspection,” said Tate.
“Other than autonomous vehicles, these customers have a single sensor bringing in rectangular ‘images’ with depth information. They all have a single model, and they don’t necessarily care how other models run – they care about finding a chip that will run their model fast and cheap, and this is where we get application-specific inference. They want more throughput and lower cost.”
Like competitor Blaize, the Flex Logix chip is a graph processor, relying on the compiler to allocate the resources of the chip to the AI model.
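In broad terms, a graph compiler of this kind walks the model’s layer graph in dependency order and maps each layer onto the chip’s compute resources. The toy sketch below illustrates only the general idea – the layer names, tile count and round-robin policy are illustrative assumptions, not a description of the Flex Logix or Blaize compilers:

```python
# Toy sketch of graph-based resource allocation: walk a small CNN-like
# layer graph in topological (dependency) order and assign each layer
# to a compute tile round-robin. Purely illustrative; real inference
# compilers use far more sophisticated placement and scheduling.
from graphlib import TopologicalSorter

# Layer -> set of layers it depends on (hypothetical graph).
graph = {
    "conv1": set(),
    "conv2": {"conv1"},
    "conv3": {"conv1"},
    "concat": {"conv2", "conv3"},
    "fc": {"concat"},
}

NUM_TILES = 2  # assumed number of compute tiles
schedule = {}
for i, layer in enumerate(TopologicalSorter(graph).static_order()):
    schedule[layer] = f"tile{i % NUM_TILES}"

print(schedule)  # every layer mapped to a tile, dependencies first
```

The point of the graph representation is that the compiler, not the hardware, decides where each layer runs, which is what lets one chip be re-targeted to different models.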
“Ultrasound or MRI use big models and large images,” he said. “The smallest is 0.5Mpixel, up to 4Mpixel. We run the largest models – the weights alone are 62Mbytes of data – and our customers want to run big images and don’t want to give up on precision.”
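For scale, the 62Mbyte figure is consistent with a detector of roughly 62 million parameters stored at one byte per weight (INT8) – the parameter count and precisions below are assumptions for illustration, not stated in the article:

```python
# Back-of-the-envelope weight-storage calculation.
# ~62M parameters is an assumed model size; precisions are illustrative.
def weight_megabytes(num_params: int, bytes_per_weight: int) -> float:
    """Megabytes needed to hold the model weights at a given precision."""
    return num_params * bytes_per_weight / 1e6

PARAMS = 62_000_000
print(weight_megabytes(PARAMS, 1), "MB at INT8")  # 62.0 MB at INT8
print(weight_megabytes(PARAMS, 2), "MB at FP16")  # 124.0 MB at FP16
```

At these sizes the weights cannot all sit in on-chip SRAM on a low-cost die, which is why holding throughput without dropping precision is the hard part of edge inference.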
“You can’t run these models cost-effectively on an FPGA – to implement the entire model you need a very large FPGA, and those are very expensive, as every layer of the model has to be implemented. We discarded that idea a long time ago and we solve the