
Chip architecture can perform AI inference and learning

Technology News | By eeNews Europe



Tenstorrent calls it “the first conditional execution architecture for artificial intelligence” and has announced its flagship product: Grayskull.

The processor’s architecture scales from battery-powered IoT devices to large cloud servers, Tenstorrent claims. The company’s team comprises alumni of hardware companies such as Nvidia and AMD, and it is backed by Real Ventures and Eclipse Venture Capital.

Tenstorrent’s approach is based on the dynamic elimination of unnecessary computation, such as multiplications by zero and other redundant operations that arise in neural networks.
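
As a rough illustration of the idea only (not Tenstorrent's actual hardware mechanism), a multiply whose operand is zero contributes nothing to a result and can simply be skipped. The Python sketch below shows the principle on a plain dot product.

```python
# Minimal sketch of zero-skipping in a dot product: any multiply with a
# zero operand is eliminated rather than executed. Illustrative only;
# this is not how Tenstorrent's hardware is implemented.

def sparse_dot(weights, activations):
    total = 0.0
    for w, a in zip(weights, activations):
        if w == 0.0 or a == 0.0:
            continue          # unnecessary multiply eliminated
        total += w * a
    return total

# With many zero activations (common after ReLU), most multiplies vanish:
# only -1.0 * 3.0 is actually computed here.
print(sparse_dot([0.5, -1.0, 0.0, 2.0], [0.0, 3.0, 7.0, 0.0]))  # -3.0
```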

This stripping of the math load breaks the link between growth in model size and the compute and memory bandwidth required. The so-called conditional computation lets a neural network model adapt to the exact input presented, for both inference and learning. One example is natural language processing, where conditional computation can dynamically prune portions of the model depending on the amount of text presented and on other input characteristics.
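
As a minimal software sketch of what input-dependent computation can look like, the hypothetical encoder below sizes its work to the sentence actually presented instead of always paying for a fixed maximum sequence length. The names, shapes and the 128-token limit are illustrative assumptions, not Tenstorrent's API.

```python
# Hedged sketch of "conditional" computation for NLP: a short input costs
# proportionally less than the fixed-length, fully padded path.
import numpy as np

MAX_SEQ_LEN = 128   # assumed maximum sequence length for illustration

def encode(tokens, weight):
    # Static approach: pad to MAX_SEQ_LEN and compute on every position.
    padded = np.zeros((MAX_SEQ_LEN, weight.shape[0]))
    padded[:len(tokens)] = tokens
    return padded @ weight                 # cost fixed at MAX_SEQ_LEN rows

def encode_conditional(tokens, weight):
    # Conditional approach: compute only on the tokens that are present.
    return np.asarray(tokens) @ weight     # cost scales with len(tokens)

rng = np.random.default_rng(0)
w = rng.standard_normal((16, 16))
short_sentence = rng.standard_normal((9, 16))    # 9 tokens instead of 128

full = encode(short_sentence, w)
cond = encode_conditional(short_sentence, w)
assert np.allclose(full[:len(short_sentence)], cond)   # same answer, ~14x fewer row products
```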

The Tenstorrent architecture features an array of Tensix cores, each comprising a packet processor and a programmable SIMD and math computation block, along with five single-issue RISC cores. The array of Tensix cores is stitched together with a custom, double 2D torus network-on-chip (NoC).
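
For readers unfamiliar with the topology, the sketch below shows how cores might be addressed on a 2D torus: every node has wrap-around links to four neighbours, so edge cores are no different from interior ones. The 12x10 grid dimensions are an assumption for illustration; the article does not state Grayskull's actual core layout.

```python
# Illustrative 2D torus addressing for an array of cores. Grid dimensions
# are assumed (12 x 10 = 120 cores); Tenstorrent's real layout may differ.
GRID_W, GRID_H = 12, 10

def torus_neighbours(x, y):
    return [
        ((x - 1) % GRID_W, y),   # west, wrapping around the row
        ((x + 1) % GRID_W, y),   # east
        (x, (y - 1) % GRID_H),   # north, wrapping around the column
        (x, (y + 1) % GRID_H),   # south
    ]

# A corner core still has four neighbours thanks to the wrap-around links.
print(torus_neighbours(0, 0))   # [(11, 0), (1, 0), (0, 9), (0, 1)]
```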

Grayskull integrates 120 Tensix cores with 120Mbytes of local SRAM. The AI processor also provides eight channels of LPDDR4, supporting up to 16Gbytes of external DRAM, and 16 lanes of PCIe Gen 4. On a 75W bus-powered PCIe card, Grayskull achieves 368TOPS and, powered by conditional execution, up to 23,345 sentences/second using BERT-Base on the SQuAD 1.1 data set.
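
A quick back-of-the-envelope calculation from the figures quoted above gives the card-level efficiency; note that 75W is the bus-powered card budget, not a measured chip power figure.

```python
# Card-level efficiency implied by the published figures (not a measurement).
tops = 368
card_power_w = 75
print(f"~{tops / card_power_w:.1f} TOPS per watt at the card level")  # ~4.9
```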

Tenstorrent has benchmarked Grayskull against machine learning models such as BERT and ResNet-50 and claims orders-of-magnitude improvements.

“The past several years in parallel computer architecture were all about increasing TOPS, TOPS per watt and TOPS per cost, and the ability to utilize provisioned TOPS well. As machine learning model complexity continues to explode, and the ability to improve TOPS oriented metrics rapidly diminishes, the future of the growing computational demand naturally leads to stepping away from brute force computation and enabling solutions with more scale than ever before,” said Ljubisa Bajic, founder and CEO of Tenstorrent. “Tenstorrent was created with this future in mind. Today, we are introducing Grayskull, Tenstorrent’s first AI processor that is sampling to our lead partners, and will be ready for production in the fall of 2020.”

Related links and articles:

www.tenstorrent.com

News articles:

GrAI Matter, Paris research gives rise to AI processor for the edge

AI processor startup emerges from Imagination

Eta ships AI processor for sensor applications

Former Apple, Google hardware engineers create processor startup
