The D1 chip, announced at the company’s live-streamed AI Day event, is part of the company’s previously announced Dojo neural network training supercomputer system, which is being developed to “process truly vast amounts of video data,” such as video feeds from Tesla vehicles. Based on a 7-nanometer manufacturing process and optimized for machine learning workloads, the D1 chip features 362 teraflops of processing power.
Each chip is said to contain over 50 billion transistors on a 645 mm² die. The company says that 25 of these chips are placed onto a single “training tile,” and that 120 tiles are then combined across several server cabinets, yielding over an exaflop of compute.
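The scaling claim can be sanity-checked with quick arithmetic from the announced figures (362 teraflops per chip, 25 chips per tile, 120 tiles):

```python
# Check Tesla's Dojo scaling arithmetic from the announced figures.
CHIP_TFLOPS = 362       # per-chip throughput, teraflops
CHIPS_PER_TILE = 25     # chips on one training tile
TILES = 120             # tiles combined across cabinets

tile_tflops = CHIP_TFLOPS * CHIPS_PER_TILE    # 9,050 TFLOPS per tile
system_tflops = tile_tflops * TILES           # 1,086,000 TFLOPS total
system_exaflops = system_tflops / 1e6         # convert teraflops to exaflops

print(f"Per tile: {tile_tflops:,} TFLOPS")
print(f"System:   {system_exaflops:.3f} EFLOPS")  # ~1.086, i.e. "over an exaflop"
```

At roughly 1.086 exaflops, the announced figures are consistent with the company’s “over an exaflop” claim.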
Ganesh Venkataramanan, senior director of Autopilot hardware at Tesla, told AI Day participants and viewers, “We are assembling our first cabinets pretty soon.”
The company says the technology will make Dojo the fastest AI-training computer.
As for when the Dojo supercomputer will be ready, “We should have Dojo operational next year,” says Tesla CEO Elon Musk.