
Tesla’s Dojo ‘training tile’ in production at TSMC

Technology News |
By Peter Clarke

Car maker Tesla’s 25-chiplet Dojo processor is in production at TSMC, according to reports from the foundry’s recent North American Technology Symposium in Santa Clara, California.

TSMC described it as “the world’s first system-on-wafer in production for next-generation data centers.”

The Dojo processor, or Dojo training tile as Tesla calls it, is a 5 by 5 array of D1 AI processors placed on a carrier wafer and interconnected using TSMC’s integrated fan-out (InFO) technology to create a system-on-wafer (SoW).

Dojo is a supercomputer designed by Tesla to run the machine learning models behind its Full Self-Driving (FSD) advanced driver-assistance system. The Tesla D1 is an internally developed AI inference and training processor with about 50 billion transistors, made on TSMC’s 7nm manufacturing process. Each D1 provides 362 TFLOPS of compute. Many such tiles are incorporated within a supercomputer, although the current draw and heat generation are reportedly problematic.
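As a rough back-of-the-envelope sketch only, multiplying the quoted per-die figure across the 25 D1 dies in a tile gives the tile's aggregate peak (the number format behind the 362 TFLOPS figure is not specified here, and real sustained throughput will be lower):

# Rough aggregate compute for one Dojo training tile, assuming all
# 25 D1 dies in the 5 x 5 array deliver the quoted 362 TFLOPS peak.
d1_peak_tflops = 362        # per-die figure quoted above
dies_per_tile = 5 * 5       # 25-chiplet tile

tile_peak_tflops = d1_peak_tflops * dies_per_tile
print(tile_peak_tflops)     # 9050 TFLOPS, i.e. roughly 9 PFLOPS per tile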

Wafer-scale packaging from TSMC

The first installation of Dojo supercomputer cabinets began in Palo Alto, California, with Tesla also building a Dojo data center at its headquarters in Austin, Texas. Tesla has also announced a US$500 million project to build a supercomputer cluster in Buffalo, New York.

TSMC reportedly said at the symposium it plans to offer more complex wafer-scale packaging in 2027. This would provide up to 40 reticle-sized chips plus up to 60 high-bandwidth memory chips on a wafer.

TSMC is also the foundry supplier to Cerebras Systems Inc., which already makes wafer-scale rectangular multi-die processors. Cerebras recently launched its third-generation wafer-scale AI accelerator, the CS-3, which has over 4 trillion transistors across 900,000 cores, is twice as fast as its predecessor and sets records in training large language and multi-modal models. The company used its CS-2 accelerator to deliver a 4-exaFLOPS AI supercomputer in 2023.

Related links and articles:

www.tsmc.com

News articles:

TSMC plans 1.6nm process for 2026

Tesla signs Indian chip supply deal with Tata, says report

Cerebras shows third generation wafer scale AI processor
