NVLink Fusion Teams Up with Arm Neoverse for Advanced AI Data Centers

November 19, 2025

Arm and NVIDIA are extending their long-running collaboration with the integration of NVIDIA NVLink Fusion into the Arm Neoverse platform. The announcement underscores the accelerating shift toward energy-efficient AI data center architectures. As hyperscalers race to boost AI throughput without escalating power budgets, the companies aim to deliver a unified, coherent infrastructure capable of scaling to next-generation workloads.

For eeNews Europe readers working as system architects, chip designers, and data-center engineers, this development offers insight into how future AI platforms will be built, opening new architectural options for coherent CPU-accelerator integration and rack-scale system design.

Expanding Neoverse for the AI era

Arm is positioning Neoverse as the compute foundation for power-efficient, highly scalable AI deployments. With more than one billion Neoverse cores already deployed and broad hyperscaler adoption across AWS, Google, Microsoft, Oracle, and Meta, the company says the platform is on track to reach 50% market share among top cloud providers in 2025. Major AI data-center initiatives, such as the Stargate project, are also committing to Arm-based compute for energy-efficient scaling.

Against this backdrop of rising demand, Arm says it is extending Neoverse with NVIDIA NVLink Fusion to bring Grace Blackwell-class performance, bandwidth, and coherency to the broader ecosystem. NVLink Fusion provides a high-bandwidth, coherent pathway between CPUs, GPUs, and accelerators, enabling partners to build custom, rack-scale AI systems with flexible accelerator choices.

“Arm and NVIDIA are working together to set a new standard for AI infrastructure,” said Rene Haas, CEO of Arm. “Extending the Arm Neoverse platform with NVIDIA NVLink Fusion brings Grace Blackwell-class performance to every partner building on Arm — a milestone that reflects the incredible momentum we’re seeing in the data center.”

Jensen Huang, founder and CEO of NVIDIA, added: “NVLink Fusion is the connective fabric of the AI era — linking every CPU, GPU and accelerator into one unified rack-scale architecture. Together with Arm, we’re extending this vision across Neoverse to empower innovators everywhere to design the next generation of specialized AI infrastructure.”

Coherent integration for high-bandwidth AI systems

The companies highlight that NVLink Fusion is designed to interface directly with AMBA CHI C2C, Arm’s coherent, high-bandwidth chip-to-chip protocol. By supporting the latest C2C specification in Neoverse, Arm says it can ensure seamless integration between Arm-based CPUs and accelerators connected over NVLink Fusion.

This compatibility gives ecosystem partners a coherent fabric for moving data efficiently across heterogeneous AI systems, reducing memory bottlenecks, accelerating system development, and cutting time-to-market for emerging AI accelerators.

The approach builds on previous achievements such as the Grace Hopper and Grace Blackwell platforms, where tight CPU-GPU coherency set new benchmarks for high-performance computing. With NVLink Fusion now available across the full Neoverse ecosystem, Arm and NVIDIA are widening access to these capabilities, enabling differentiated system designs optimized for intelligence per watt.
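To illustrate what hardware CPU-GPU coherency means for software, the sketch below is a minimal, hypothetical CUDA example (not from Arm or NVIDIA). It assumes a hardware-coherent platform such as Grace Hopper, where a GPU kernel can operate directly on memory allocated with plain malloc(), without an explicit staging copy.

```cuda
// Minimal sketch: GPU kernel accessing CPU-allocated (malloc) memory directly.
// Assumes a hardware-coherent CPU-GPU platform (e.g. Grace Hopper class).
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void scale(float *data, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;   // GPU writes straight into CPU-allocated memory
}

int main() {
    const int n = 1 << 20;
    // Plain system allocation; on coherent platforms the GPU can access it
    // without cudaMemcpy staging or special allocators.
    float *data = static_cast<float *>(malloc(n * sizeof(float)));
    for (int i = 0; i < n; ++i) data[i] = 1.0f;

    scale<<<(n + 255) / 256, 256>>>(data, n, 2.0f);
    cudaDeviceSynchronize();

    printf("data[0] = %f\n", data[0]);  // expect 2.0 on a coherent system
    free(data);
    return 0;
}
```

On platforms without this hardware coherency, the same pattern would typically require cudaMallocManaged or explicit cudaMemcpy transfers between host and device buffers.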

A strengthening partnership

As AI models grow in size and power budgets tighten, the Arm–NVIDIA partnership appears to be evolving toward deeper co-design at the platform level. By combining Arm’s energy-efficient compute with NVIDIA’s high-bandwidth interconnects, the companies are aiming to define the next generation of AI data-center architecture.

For AI hardware developers and data-center operators in Europe, this integration signals a new phase of scalable, energy-efficient infrastructure — one where coherent CPU-accelerator designs become the norm rather than the exception.
