Graphcore's execs on machine learning, company building

November 01, 2016 // By Peter Clarke
Graphcore Ltd. (Bristol, England), a startup developing a machine learning processor, is not ready to make details of its hardware architecture public but CEO Nigel Toon and CTO Simon Knowles were prepared to discuss some of the background thinking with EE Times Europe.

Toon said that Graphcore currently stands at 40 employees and that the $30 million raised in the recently announced Series A (see Graphcore gets big backing for machine learning) would be used to complete the first design and for some limited expansion. "We could have taken more but this is sufficient to get product out," said Toon. "We will keep the engineering based here in Bristol but there is scope for some customer support and business development roles in Silicon Valley, Seattle and China," he added.

Nigel Toon, CEO and co-founder of Graphcore.

Toon acknowledged that one other major technology company, besides Samsung and Robert Bosch, contributed to the Series A funding. He said that company has chosen not to go public on the investment.

With regard to the Intelligent Processor Unit (IPU) Knowles commented: "We will release our technology in the second half of 2017. It is a brand new, from-scratch design."

Much of the team had previously worked with Knowles at Element 14, designing chips for wireline communications, and at Icera, designing for wireless. Now the team is doing the same for machine learning.

What Graphcore has said about the IPU on its website is that it will feature massively parallel, low-precision floating-point compute and much higher compute density than other solutions. The IPU will hold the complete machine learning model inside the processor and offer 100x the memory bandwidth of other solutions.

This will be backed up with an IPU-Appliance, intended to increase the performance of both training and inference by between 10x and 100x compared with contemporary systems, and an IPU-Accelerator, a PCIe card designed to plug into a conventional server to accelerate machine learning applications.

Knowles said: "It will be a very large chip. We have not taped out, and because it is a large chip we cannot really benefit from doing test circuits on shuttle runs. Fortunately, we