Nvidia has announced that its highly anticipated Jetson Thor chip for embedded AI applications is now shipping. It comes in two versions, the T5000 and the T4000, each aimed at use cases such as humanoid robots, image processing, and multi-modal edge AI.
The T4000 variant has 1536 Blackwell GPU cores and 64 fifth-generation tensor cores running at up to 1.57GHz, along with 12 Arm Neoverse V3AE automotive cores clocked at 2.5GHz and 64GB of low-power LPDDR5X memory. That is enough for up to 1200 TFLOPS of sparse FP4 4-bit inference performance, with support for eight lanes of PCI Express 5.0. Power consumption for this variant ranges from 40W to 70W.
The higher-end T5000 is equipped with 2560 Blackwell cores, 96 tensor cores, and 14 V3AE cores, and doubles the memory to 128GB of LPDDR5X, good for up to 2070 TFLOPS of sparse FP4 inference. Like the T4000, it supports up to eight lanes of PCIe 5.0, with power consumption ranging from 40W to 130W.
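For developers bringing up either module, the GPU configuration described above can be sanity-checked at runtime. The sketch below assumes a CUDA-enabled PyTorch build is installed on the module (as typically shipped with JetPack images); the exact device name and streaming-multiprocessor count reported will depend on the module and software stack.

```python
import torch

# Minimal check of the on-module GPU, assuming a CUDA-enabled PyTorch build.
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"Device:             {props.name}")
    print(f"Compute capability: {props.major}.{props.minor}")
    print(f"Streaming MPs:      {props.multi_processor_count}")
    print(f"Total memory:       {props.total_memory / 1024**3:.1f} GiB")
else:
    print("No CUDA device visible - check the JetPack/driver installation.")
```

On recent Nvidia GPU architectures a streaming multiprocessor typically packs 128 CUDA cores, so the T5000's 2560 cores should show up as roughly 20 SMs and the T4000's 1536 as about 12, alongside the 128GB or 64GB of unified memory.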
The module package measures 100 x 87 mm and includes an integrated thermal transfer plate with a heat pipe and a 699-pin connector. The release also marks a milestone for the related DRIVE Thor chip, which targets autonomous driving in both trucks and passenger vehicles.
Embedded-systems vendor Advantech has already adopted the T5000 for its MIC-743 edge AI appliance, which is now available for deployment. The MIC-743 is aimed at high-performance workloads such as vision language models (VLMs) and large language models (LLMs), offering up to 2,070 TFLOPS of sparse FP4 AI performance and 128GB of LPDDR5X memory, which lets it handle multiple multi-modal AI tasks simultaneously.
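To put the memory figure in context, a rough back-of-the-envelope estimate shows why 128GB of unified memory leaves room for several models at once: 4-bit FP4 weights need about half a byte per parameter. The model sizes and overhead factor below are illustrative assumptions, not MIC-743 benchmarks.

```python
# Back-of-the-envelope FP4 memory estimate: 4-bit weights ~ 0.5 bytes/parameter.
# Model sizes and the overhead factor are illustrative assumptions,
# not measurements taken on the MIC-743.
BYTES_PER_FP4_PARAM = 0.5
OVERHEAD = 1.3  # rough allowance for activations, KV cache, runtime buffers

models = {"8B LLM": 8e9, "70B LLM": 70e9, "7B VLM": 7e9}

total_gb = 0.0
for name, params in models.items():
    gb = params * BYTES_PER_FP4_PARAM * OVERHEAD / 1e9
    total_gb += gb
    print(f"{name:8s} ~ {gb:5.1f} GB")

print(f"Combined  ~ {total_gb:.1f} GB of the module's 128GB of LPDDR5X")
```

Under these assumptions all three models together occupy well under half of the available memory, which is consistent with the claim that the appliance can serve several multi-modal workloads concurrently.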