Anthropic has signed a compute agreement with SpaceXAI for all of the capacity at Colossus 1, the Memphis AI data centre originally built around xAI infrastructure. The deal gives Anthropic more than 300 MW of additional capacity and more than 220,000 NVIDIA GPUs for Claude, coming online within the month.
Anthropic said the extra capacity will directly improve service for Claude Pro and Claude Max subscribers. In a statement, the company said it is doubling Claude Code’s five-hour rate limits for Pro, Max, Team and seat-based Enterprise plans, removing the peak-hours limit reduction for Pro and Max accounts, and raising API limits for Claude Opus models.
The Anthropic compute deal is notable because it gives the Claude maker a large block of NVIDIA GPU capacity while its longer-running infrastructure arrangements come online. Anthropic said it continues to train and run Claude across AWS Trainium, Google TPUs and NVIDIA GPUs, rather than relying on a single accelerator or cloud platform.
The SpaceXAI agreement follows several other large AI infrastructure commitments. Anthropic has pointed to an Amazon arrangement of up to 5 GW, including nearly 1 GW of new capacity by the end of 2026, a 5 GW Google and Broadcom agreement beginning in 2027, a Microsoft and NVIDIA partnership including $30 billion of Azure capacity, and a $50 billion US AI infrastructure investment with Fluidstack.
That makes Colossus 1 a short-term capacity relief valve rather than a replacement for Anthropic’s broader cloud and silicon strategy. As previously reported by eeNews Europe when Anthropic expanded its Google and Broadcom TPU plans, the company is already committed to multi-gigawatt AI compute build-outs that extend beyond 2026.
For SpaceXAI, the deal turns Colossus 1 into a commercial compute asset for a major external AI customer. The xAI announcement says Colossus 1 includes dense deployments of NVIDIA H100, H200 and GB200 accelerators for training, fine-tuning, inference and high-performance computing workloads.
The arrangement is also unusual because Anthropic and xAI compete at the model layer. Anthropic sells Claude services to developers, enterprises and consumers, while xAI develops Grok. The deal shows how scarce high-end AI infrastructure has become: competitors may still become customers when the bottleneck is power, land, GPUs, cooling and deployment speed.
Anthropic has also expressed interest in partnering with SpaceXAI to develop multiple gigawatts of orbital AI compute capacity. For now, that remains a statement of interest rather than a detailed deployment plan. No timetable, technical architecture, launch schedule or regulatory route has been disclosed.
The immediate story is more terrestrial. Anthropic needs more compute to support Claude Code, Claude Opus and paid Claude subscribers; SpaceXAI has a very large GPU cluster available to lease. The Anthropic compute deal therefore fits the wider AI infrastructure pattern: frontier model companies are buying capacity wherever they can get it, while accelerator supply and power availability continue to shape the pace of AI product rollout.