The chip is known as the Apple Neural Engine, and Apple engineers are said to be “racing” to catch up with other companies that are moving aggressively on cores and chips designed to perform the very large number of multiplications used in neural networks.
Google has developed and taken to silicon two generations of its Tensor Processing Unit (TPU) in the last couple of years (see Google’s second TPU processor comes out).
Neural networks, which require millions of weights to be multiplied against input data and against the outputs of successive layers of neurons, have conventionally been run in software on general-purpose processors. However, the enormous benefit they gain from parallelism has seen them deployed on GPUs and on more-or-less application-specific DSPs.
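To make the workload concrete, here is a minimal sketch (illustrative only; the toy layer sizes and values are my own, not Apple's or Google's) of a single fully connected layer. Each output neuron requires one multiply-accumulate per input, so a layer with millions of weights performs millions of multiplications — exactly the repetitive, independent arithmetic that GPUs, DSPs, and dedicated accelerators parallelise.

```python
def dense_layer(inputs, weights, biases):
    """Compute outputs[j] = sum_i inputs[i] * weights[i][j] + biases[j].

    Every output is a dot product: the work is dominated by
    multiply-accumulate operations, and each output can be
    computed independently of the others (hence the parallelism).
    """
    return [
        sum(x * w for x, w in zip(inputs, column)) + b
        for column, b in zip(zip(*weights), biases)
    ]

# A toy 3-input, 2-output layer: 3 * 2 = 6 multiplications.
# Real networks repeat this across millions of weights and many layers.
inputs = [1.0, 2.0, 3.0]
weights = [[0.1, 0.2],   # weights[i][j] connects input i to output j
           [0.3, 0.4],
           [0.5, 0.6]]
biases = [0.0, 0.1]
outputs = dense_layer(inputs, weights, biases)  # approximately [2.2, 2.9]
```

A dedicated neural engine wins by executing thousands of these multiply-accumulates per cycle in fixed-function hardware, rather than one at a time in a general-purpose instruction stream.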
The kinds of applications that benefit include voice and face recognition, and classification and inference over varied types of data, where dedicated hardware can deliver a couple of orders of magnitude improvement over conventional processors.
In particular, Apple has aspirations in mixed reality and self-driving cars, which need the efficiency of dedicated machine-learning hardware to reduce energy consumption and latency, the report said.
Apple also intends to include the neural engine in future versions of the iPhone and the iPad, according to the report. This has the advantage that recognition tasks which would otherwise have to be sent to the cloud could be performed locally on the device.
Apple could discuss the neural engine at its annual developers conference in June, where it is also expected to introduce the next version of its mobile operating system, iOS 11.