The processors are the ARM ML processor and the ARM OD processor, with OD standing for object detection. These devices will deliver trillions of operations per second and are intended for edge devices such as mobile phones. Indeed, ARM claims the ML processor is the "most efficient solution to run neural networks."
The ARM ML processor comprises fixed-function engines and programmable layer engines for selected primitive operations, leaving room for innovation and future algorithms. A network control unit manages the overall execution of the neural network, and a DMA unit moves data in and out of main memory. Onboard memory provides central storage for weights and feature maps, reducing traffic to external memory and, therefore, power consumption.
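The power argument behind that onboard memory is straightforward: DRAM accesses dominate energy in inference, so keeping weights resident and intermediate feature maps on-chip removes most external traffic. A minimal sketch of that reasoning, using a toy layer-by-layer model with invented sizes (ARM has published no such figures):

```python
def dram_traffic_mb(weights_mb, fmap_mb, layers, on_chip):
    """Per-inference external-memory traffic for a toy accelerator model.

    All sizes are hypothetical; this only illustrates why on-chip
    storage of weights and feature maps cuts DRAM traffic.
    """
    io = 2 * fmap_mb  # network input read + final output write
    if on_chip:
        # Weights stay resident in onboard memory and intermediate
        # feature maps never leave the chip: only I/O tensors hit DRAM.
        return io
    # Otherwise weights are fetched from DRAM and every intermediate
    # feature map is spilled out and read back by the next layer.
    return io + weights_mb + 2 * (layers - 1) * fmap_mb

# Toy network: 4 MB of weights, 1 MB feature maps, 50 layers
print(dram_traffic_mb(4, 1, 50, on_chip=False))  # 104 MB per inference
print(dram_traffic_mb(4, 1, 50, on_chip=True))   # 2 MB per inference
```

Even with these made-up numbers, the two-orders-of-magnitude gap shows why accelerator designers spend silicon area on local SRAM rather than bandwidth to external memory.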
Not much detail yet in ARM's description of its ML processor. Source: ARM
Performance is said to exceed 4.6 TOPS in mobile environments, at an efficiency of 3 TOPS per watt. The processor is also said to provide a "massive efficiency uplift from CPUs, GPUs, DSPs and accelerators."
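Taken together, those two figures imply a power budget. A quick back-of-envelope check (the real draw depends on workload, precision and process node, none of which ARM has disclosed):

```python
# Sanity check on ARM's quoted throughput and efficiency figures.
perf_tops = 4.6               # claimed peak throughput, TOPS
efficiency_tops_per_w = 3.0   # claimed efficiency, TOPS per watt

# Power implied at peak throughput: throughput / efficiency
implied_power_w = perf_tops / efficiency_tops_per_w
print(f"Implied power at peak: ~{implied_power_w:.2f} W")
```

That works out to roughly 1.5 W, a plausible envelope for an accelerator block inside a smartphone SoC.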
However, while the design is said to be tuned for implementation in advanced process geometries, ARM does not indicate whether the IP has already been licensed to lead partners. Nor does ARM specify the precise nature of the programmable and fixed-function engines, or which data types are supported.