Machine learning interview with Jem Davies of ARM

February 15, 2018 // By Peter Clarke
Processor IP licensor ARM has announced a dedicated machine learning processor core and eeNews Europe spoke with Jem Davies, general manager of the machine learning group at ARM, and Dennis Laudick, vice president of marketing for machine learning (ML), to find out more.

Davies recently migrated from leading ARM's graphics and vision business (see Thoughts on Jem Davies leading ARM's machine learning group).

Davies started our interview by making the point that machine learning computation by way of neural networks is a fundamental shift in computation, and that ARM has been taking its time to ensure its architectural approach is sufficiently general and scalable to have a long life in the market. It has now completed the design of the first hardware implementation, the ML processor, which it will distribute to licensees in the middle of 2018. It is also offering an iteration of its object detection image processor (see ARM launches two machine learning processors); both come under the Project Trillium banner.

We asked what process node the ML processor core is targeting.

"Machine learning is coming to every market segment we operate in, therefore the IP could be deployed in many different nodes. Nonetheless, the ML processor is aimed at the premium smartphone market, which today implies designs aimed at 7nm," Davies said. "The 16nm node could be an alternative, and then there are also things like smart IP cameras that we are also targeting. And the 28nm node will go on for a long time, so it could turn up there."

Machine learning will reach from sensors to servers. Source: ARM.

That said, ARM's engineers have had to make choices about the size of the circuit and how many resources to include. "The first ML processor is a fixed configuration aimed at the premium smartphone. Then there will be scalable, configurable IP." Davies declined to say how large the fixed-configuration core is in a 7nm process, but said the intention is that it would easily fit inside an application processor SoC.

When we offered up a size of one square millimeter, Davies said it was of the right order, plus or minus 50 percent. Another clue to the size comes from the power consumption: ARM reckons the ML processor is capable of more than 4.6 tera operations per second (TOPS) at an efficiency of 3TOPS per watt.
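As a rough check on those figures, dividing the claimed peak throughput by the claimed efficiency implies a power draw at peak of roughly 1.5 W. This is a back-of-envelope sketch using only the numbers quoted in the article, not a figure ARM has stated:

```python
# Back-of-envelope implied power for ARM's ML processor,
# using the throughput and efficiency figures quoted in the article.
peak_tops = 4.6              # "more than 4.6 tera operations per second"
efficiency_tops_per_w = 3.0  # "3 TOPS per watt"

implied_power_w = peak_tops / efficiency_tops_per_w
print(f"Implied power at peak: ~{implied_power_w:.2f} W")
```

A sustained draw in that range is consistent with a core intended to live inside a premium-smartphone application processor.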

