
UALink Challenges Nvidia in Data Center Interconnect

June 01, 2024


The data centre industry is undergoing a significant transformation with the emergence of the Ultra Accelerator Link (UALink) Consortium, a collaborative effort by major tech companies to challenge Nvidia's NVLink GPU interconnect technology. AMD, Broadcom, Cisco, Google, Hewlett Packard Enterprise, Intel, Meta, and Microsoft have joined forces to develop an open standard interface for low-latency interconnects in next-generation AI accelerators within data centres.

While Nvidia currently holds a dominant position in the market with its proprietary NVLink technology, the formation of UALink signals a shift towards a more open and collaborative approach to interconnect technologies. Both NVLink and UALink offer alternatives to traditional Ethernet, Ultra Ethernet (from the Ultra Ethernet Consortium, UEC), or InfiniBand technologies commonly used in AI data centres.

"In a very short period of time, the technology industry has embraced challenges that AI and HPC have uncovered. Interconnecting accelerators like GPUs requires a holistic perspective when seeking to improve efficiencies and performance," said J Metz, Chair of the Ultra Ethernet Consortium. The collaboration between UEC and UALink aims to address the evolving needs of AI computing pods by combining scale-up and scale-out protocols.

The UALink Promoter Group comprises companies with extensive experience in developing large-scale AI and HPC solutions based on open standards and robust ecosystem support. The group is working on a specification to define a high-speed, low-latency interconnect for scalable communications between accelerators and switches in AI computing pods.

The 1.0 specification being developed by the UALink Promoter Group will enable the connection of up to 1,024 accelerators within an AI computing pod, facilitating direct loads and stores between the memory attached to accelerators. The UALink Consortium is expected to be incorporated in Q3 of 2024, with the 1.0 specification becoming available to member companies.
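To make the load/store framing concrete, the rough Python sketch below contrasts memory-semantic access, in which one accelerator reads and writes memory attached to another as if it were ordinary memory, with the explicit send/receive pairs of message-passing designs. The names RemotePool and POD_SIZE are illustrative assumptions for this sketch only and are not drawn from the UALink specification.

# A minimal, hypothetical sketch of memory-semantic (load/store) access
# between accelerators in a pod. RemotePool and POD_SIZE are illustrative
# assumptions, not terms from the UALink specification.
class RemotePool:
    """Toy stand-in for memory attached to another accelerator."""
    def __init__(self, size):
        self.mem = [0] * size

    def load(self, addr):
        # Over a load/store fabric, reading remote memory looks like a
        # plain memory access rather than an explicit receive.
        return self.mem[addr]

    def store(self, addr, value):
        self.mem[addr] = value

POD_SIZE = 1024  # the 1.0 specification targets up to 1,024 accelerators per pod
pools = [RemotePool(size=16) for _ in range(POD_SIZE)]

# Accelerator 0 writes directly into memory attached to accelerator 1023;
# no matching send/receive pair is needed, unlike message-passing designs.
pools[1023].store(0, 42)
assert pools[1023].load(0) == 42

The point of the sketch is the programming model: with direct loads and stores, data movement between accelerators is expressed as memory operations rather than as explicit network messages, which is what allows a pod of accelerators to be treated as one large, shared resource.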

"The work being done by the companies in UALink to create an open, high-performance, and scalable accelerator fabric is critical for the future of AI," said Forrest Norrod, executive vice president and general manager at AMD. The commitment to advancing AI technology through open standards and collaboration is echoed by other founding members of the UALink Consortium.

"Broadcom is proud to be one of the founding members of the UALink Consortium, building upon our long-term commitment to increasing large-scale AI technology implementation in data centers," stated Jas Tremblay, vice president and general manager of the Data Center Solutions Group at Broadcom. The emphasis on open ecosystem collaboration is seen as essential for enabling scale-up networks with diverse high-speed and low-latency solutions.

"Open standards are crucial for HPE as we innovate in supercomputing and expand access to systems," said Trish Damkroger, senior vice president and general manager at HPE. As a founding member of the UALink Consortium, HPE aims to contribute expertise in high-performance networking and systems to develop a new open standard for accelerator interconnects in the next generation of supercomputing.

"UALink represents an important milestone in the advancement of Artificial Intelligence computing," said Sachin Katti, SVP & GM, Network and Edge Group at Intel Corporation. Intel's leadership in creating an open AI ecosystem is further demonstrated through its involvement in the UALink Consortium and other standards bodies.

With the collaborative efforts of industry leaders through the UALink Consortium, the future of data centres and AI computing is poised for significant advancements in performance, efficiency, and scalability. The shift towards open standards and collaborative innovation is set to redefine the landscape of interconnect technologies in data centres, paving the way for a more interconnected and efficient AI ecosystem.
