SoftBank's AI servers at Foxconn can now be managed by robots, thanks to a rack developed by SoftBank that incorporates direct chip cooling technology. The development is a significant step, as SoftBank is currently funding the $100 billion Stargate project in the US in collaboration with OpenAI. The project aims to deploy AI data centers across the country equipped with Nvidia Blackwell GPU chips, with Foxconn highlighted as a potential equipment supplier. The rack is also designed to be 'robot friendly', enabling automated server replacement in clusters that can house up to 100,000 GPUs.
SoftBank has announced that this is the first rack-scale demonstration of ZutaCore's two-phase DLC (Direct Liquid Cooling) technology in an AI server using the latest-generation Nvidia H200 Hopper GPUs. The cooling system circulates a water-free insulating refrigerant in a sealed two-phase loop of liquid and gas, carrying heat away from the cold plate mounted on the GPU inside the Foxconn server.
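One reason two-phase cooling can improve efficiency is that vaporization absorbs heat at near-constant temperature, so far less fluid mass needs to circulate than in a single-phase water loop. A minimal back-of-the-envelope sketch, using assumed textbook fluid properties rather than ZutaCore specifications:

```python
# Illustrative comparison of single-phase water cooling vs. two-phase
# refrigerant cooling. All property values are assumed, typical textbook
# figures -- not ZutaCore or SoftBank specifications.

CP_WATER = 4186.0              # J/(kg*K), specific heat of liquid water (assumed)
DELTA_T = 10.0                 # K, allowed coolant temperature rise (assumed)
H_VAP_REFRIGERANT = 100_000.0  # J/kg, latent heat of a dielectric refrigerant (assumed)
SERVER_HEAT_W = 10_000.0       # W, heat load of one dense GPU server (assumed)

# Single-phase: heat is carried as sensible heat, limited by the temperature rise.
water_flow = SERVER_HEAT_W / (CP_WATER * DELTA_T)      # kg/s

# Two-phase: heat is absorbed as latent heat of vaporization at near-constant
# temperature, so much less fluid has to be pumped per second.
refrigerant_flow = SERVER_HEAT_W / H_VAP_REFRIGERANT   # kg/s

print(f"water: {water_flow:.3f} kg/s, refrigerant: {refrigerant_flow:.3f} kg/s")
```

Under these assumed numbers, the two-phase loop moves less than half the fluid mass per second, which translates into smaller pumps and lower parasitic power draw.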
While Pegatron has already certified the ZutaCore technology for its AI server boards, Iceotope in the UK has also introduced a direct cooling system for AI server racks. SoftBank, for its part, has designed and developed a rack-integrated system that brings together every server component, including the two-phase DLC cooling equipment, at rack scale. The operational demonstration and performance evaluation took place at its data center in February 2025.
Hironobu Tamba, Vice President and Head of the Data Platform Strategy Division at SoftBank, stated, "As we work to develop one of the largest AI infrastructures in Japan and homegrown large-scale language models (LLMs), improving the energy efficiency of data centers is essential to accelerate AI development and realize a sustainable society." The operational demonstration of the rack-integrated solution with optimized two-phase DLC technology confirmed stable operation and improved energy efficiency for liquid-cooled, high-density GPU servers.
The plug-and-play system simplifies the installation and rollout of AI racks in data centers, supporting both 21-inch servers compliant with the ORV3 standard from the Open Compute Project (OCP) and conventional 19-inch servers. The direct-cooled rack design allows robots to manage the data center automatically, which is crucial for the rollout of AI factories. The system also automatically recognizes peripherals and applies the necessary settings when they are connected to a device's main unit.