
Tesla owns 7th-largest supercomputer in the world

Fri 19 Aug 2022


A Tesla employee tweeted that the company now has the world’s 7th largest supercomputer, measured by GPU count.

This milestone follows a recent upgrade in which Tesla added 1,600 Nvidia A100 GPUs to the cluster.

The tweet came from Tim Zaman, an Engineering Manager at Tesla, who gave the GPU totals for the upgraded cluster and noted that, as a result, the company now operates the 7th largest supercomputer in the world by GPU count.

The Tesla supercomputer, related to the company’s Project Dojo, was originally unveiled at Tesla’s 2021 AI Day. At that time, the company claimed that it was the fastest training machine in the world, optimised for neural net video training.

Tesla CEO Elon Musk teased the Dojo supercomputer, even projecting that it would have compute capacity exceeding an exaflop – potentially making Dojo the most powerful supercomputer in the world.

Ganesh Venkataramanan, senior director and Dojo project leader, unveiled the Dojo D1 chip at the 2021 Tesla AI Day. At the time, he highlighted the company’s expertise in chip design, noting “This was entirely designed by Tesla team internally. All the way from the architecture to the package. This chip is like GPU-level compute with a CPU level flexibility and twice the network chip level IO bandwidth.”

To create a ‘training tile’, which the company presents as a packaging breakthrough, the chips are connected seamlessly – from interfaces to power delivery to thermal management – without any ‘glue’ in between.

Tesla plans to use the new supercomputer’s video training capabilities to train its own neural networks, which are, unsurprisingly, tied to its self-driving technology. However, it also plans to make this technology available to other AI developers in the future. Moreover, the company is teasing a 10x performance improvement for the next generation of Dojo.

Tesla has claimed the following specs for this precursor cluster:

● 1.8 EFLOPS (720 nodes × 8 GPUs/node × 312 FP16 TFLOPS per A100)

● 10 PB of “hot tier” NVMe storage at 1.6 TB/s

● 640 Tbps of total switching capacity

These specs would place the Tesla supercomputer roughly 5th to 7th among the world’s most powerful systems.
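As a sanity check on the headline figure, here is a minimal back-of-the-envelope sketch in Python (our illustration, not anything from Tesla) that reproduces the 1.8 EFLOPS number from the node and GPU counts quoted above, plus one illustrative derived quantity from the storage spec:

# Back-of-the-envelope check of the quoted cluster specs.
# All inputs come from the spec list above; nothing here is Tesla code.

NODES = 720                 # nodes in the precursor cluster
GPUS_PER_NODE = 8           # Nvidia A100 GPUs per node
FP16_TFLOPS_PER_GPU = 312   # peak dense FP16 Tensor Core TFLOPS of one A100

total_gpus = NODES * GPUS_PER_NODE                     # 5,760 GPUs
peak_eflops = total_gpus * FP16_TFLOPS_PER_GPU / 1e6   # TFLOPS -> EFLOPS

# Illustrative: time to stream the full 10 PB "hot tier" at 1.6 TB/s.
HOT_TIER_TB = 10_000        # 10 PB expressed in TB
BANDWIDTH_TB_PER_S = 1.6    # quoted hot-tier bandwidth

drain_minutes = HOT_TIER_TB / BANDWIDTH_TB_PER_S / 60

print(f"{total_gpus} GPUs -> {peak_eflops:.2f} EFLOPS peak FP16")
print(f"Full hot-tier read at peak bandwidth: ~{drain_minutes:.0f} minutes")
# Output: 5760 GPUs -> 1.80 EFLOPS peak FP16
#         Full hot-tier read at peak bandwidth: ~104 minutes

In other words, the quoted 1.8 EFLOPS is simply the aggregate peak FP16 throughput of 5,760 A100s, not a measured benchmark result.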

