TSMC AI capacity: Huang says Nvidia demand could force a doubling

February 02, 2026

Nvidia CEO Jensen Huang has told reporters in Taipei that Taiwan Semiconductor Manufacturing Co. (TSMC) will have to scale wafer output aggressively to keep up with demand for AI hardware—suggesting Nvidia’s needs alone could push the foundry to more than double its capacity over the next decade.

Huang made the remarks after hosting a high-profile supplier dinner in Taipei, attended by executives including TSMC CEO C.C. Wei and Foxconn chairman Young Liu, according to a Reuters report. Tom’s Hardware, citing the South China Morning Post, also reported Huang saying TSMC needs to “work very hard” this year because Nvidia “needs a lot of wafers”.

The comments are a blunt restatement of what many in the supply chain already feel: the bottleneck is no longer AI compute alone, but wafer starts, advanced packaging throughput, and memory availability moving together. In reporting by Taiwan’s Central News Agency, republished by Focus Taiwan, Huang said Nvidia is in volume production of its Blackwell platform while also manufacturing its next-generation Vera Rubin chips, which he described as comprising multiple advanced designs, adding that TSMC is working “very, very hard” to meet demand.

TSMC, for its part, has signalled heavier investment. Reuters noted that TSMC said capital spending could rise by as much as 37% this year to $56 billion, and that spending would increase “significantly” in 2028 and 2029 in response to AI-driven demand. That kind of capital intensity fits Huang’s underlying point: meeting AI demand is turning into a long-duration build-out, not a one-cycle bump.

If Nvidia’s roadmap is the bellwether, then the next few years are likely to be defined by competition for leading-edge wafers, CoWoS-class advanced packaging, and high-bandwidth memory allocation. Huang has also warned that memory supply is a real pain point this year—an issue that can slow system shipments even when GPUs are available. The practical consequence is that supply constraints may show up as longer lead times and prioritisation decisions across hyperscalers, OEMs, and enterprise buyers, rather than as a single, obvious “shortage” of one component.

From a European perspective, the interesting angle is how much of this build-out can be distributed across geographies without undermining the clustering benefits that made Taiwan so dominant in the first place. As previously reported by eeNews Europe when TSMC and Amkor outlined advanced packaging plans for Arizona, leading-edge manufacturing is only part of the throughput equation; packaging capacity increasingly decides how quickly “available silicon” turns into shippable AI accelerators.

Huang’s “double capacity” line might prove directionally right even if the exact number moves around—because the industry’s constraint has become end-to-end throughput. That is hard to fix quickly, and even harder to fix cheaply.
