
12.4.2025

Why every major supplier says demand will outstrip capacity — no matter how much they spend. 

The Q3 2025 earnings season brought unusual alignment across the semiconductor ecosystem. CEOs from TSMC, SK Hynix, Micron, Intel, NVIDIA, and Samsung all delivered the same message: demand for advanced nodes, advanced packaging, and high-bandwidth memory (HBM) is rising much faster than capacity can be built. 

This is the clearest signal yet that AI’s supply chain bottlenecks are not short-term “tightness.” They are structural limits that will shape pricing, lead times, and availability well into 2027. 

1. CoWoS Is the Epicenter of the Bottleneck

If one technology defines the current constraint, it is CoWoS (Chip-on-Wafer-on-Substrate), the advanced packaging process underpinning nearly all high-end AI accelerators. TSMC executives were unusually direct: 

  • “Our CoWoS capacity is very tight and remains sold out through 2025 and into 2026.” — C.C. Wei, TSMC CEO 
  • “Backend capacity for leading-edge nodes is extremely tight... Supply chain constraints may impact production timelines.” — C.C. Wei, TSMC CEO 

CoWoS is the key enabler for GPUs and AI accelerators, including NVIDIA Blackwell and AMD MI355 chips. Without this packaging step, even 3nm wafers cannot become functional AI chips. 

NVIDIA confirmed the same pressure point: 

  • “Ongoing limitations in component supply, such as HBM memory, pose short-term challenges for Blackwell production... CoWoS assembly capacity is oversubscribed through at least mid-2026.” — NVIDIA Management 

Despite expansions from TSMC and OSATs, advanced packaging capacity remains the most constraining part of the AI semiconductor supply chain.

2. HBM Is Sold Out Through 2026 (HBM3E Included)

HBM, especially HBM3 and HBM3E, remains the single tightest component in the AI stack. 

  • “We have already sold out our entire 2026 HBM supply.” — Kim Jae-joon, SK Hynix CFO 
  • “Our HBM capacity for calendar 2025 and 2026 is fully booked.” — Sanjay Mehrotra, Micron CEO 
  • “HBM bit demand is growing exponentially… tight supply-demand balance through 2026 and beyond.” — C.C. Wei, TSMC 

Samsung reinforced the pricing impact: 

  • “Raising HBM prices by high-teens to low-twenties percent in 2026 contracts.” — Samsung Electronics 

Three structural realities are now clear: 

  1. HBM wafer starts cannot scale fast enough: HBM uses more process steps than standard DRAM, and validation cycles for HBM3E are longer. 
  2. Nearly all incremental supply is going to AI server builders: Hyperscalers are locking multi-year allocations for HBM3E and next-gen HBM4. 
  3. DRAM tightness will spread into 2026: As SK Hynix noted, “DRAM market is expected to remain in shortage throughout 2026, especially high-end products.”

This isn’t a short-term squeeze. HBM is becoming the defining constraint for the AI market. 

3. Leading-Edge Foundry Nodes (3nm & 2nm) Cannot Keep Up 

Advanced logic capacity, particularly TSMC 3nm and early 2nm ramp, is now experiencing the same pressure. 

In the most recent update reported by Tom’s Hardware, TSMC stated that: 

  • Demand for advanced-node wafers is currently “about three times short” of the company’s available capacity 
  • Even with significant capex, TSMC’s wafer output is “still not enough” to support AI demand 

This aligns directly with commentary from major chipmakers: 

  • “Customers’ demand for the next year will exceed our supply, even considering our investment and capacity expansion plan.” — Samsung Memory 

Advanced-node scarcity is no longer speculative. It is already reshaping wafer allocation for 2026–2027, with ripple effects across GPUs, networking ASICs, and high-performance compute.  

Suppliers Agree: If They Had More Capacity, They Could Sell 20–50% More 

Across HBM, advanced packaging, and 3nm wafers, supplier commentary converges on one reality: 

Demand is not slowing; supply is the ceiling. 

TSMC, SK Hynix, Micron, Samsung, NVIDIA, and Intel all reported: 

  • oversubscribed packaging 
  • sold-out HBM capacity, including HBM3E 
  • heavy constraints in 2nm/3nm 
  • lead-time extensions 
  • multi-year backlog increases 

In past cycles, shortages were cyclical. In the AI cycle, shortages are architectural. 

What Procurement Leaders Need to Do Now 

  1. Budget for rising HBM and DRAM pricing in 2026 (high-teens to low-twenties percent increases). Samsung and SK Hynix have already signaled contract repricing. 
  2. Prioritize allocations across HBM, CoWoS, and advanced-node logic. Demand curves in these segments are structurally above capacity curves. 
  3. Requalify alternative sources where possible (Micron HBM, Samsung HBM). Multi-supplier strategies reduce exposure. 
  4. Expect lead times on AI-related logic to extend into mid-2026.
  5. Treat AI demand as structural, not cyclical.
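To make the budgeting step concrete, here is a minimal, hypothetical sketch in Python. The baseline spend figure and the `projected_hbm_spend` helper are illustrative assumptions; only the high-teens to low-twenties percent range comes from the supplier commentary above.

```python
# Hypothetical budget-impact sketch (not from supplier disclosures):
# applies the "high-teens to low-twenties percent" 2026 HBM contract
# repricing signaled by Samsung to an illustrative baseline spend.
# The baseline figure and helper name are assumptions for illustration.

def projected_hbm_spend(baseline_usd: float, increase_pct: float) -> float:
    """Return projected spend after a contract price increase of increase_pct percent."""
    return baseline_usd * (1 + increase_pct / 100)

if __name__ == "__main__":
    baseline = 10_000_000  # hypothetical 2025 HBM spend in USD
    for pct in (17, 20, 22):  # low, mid, and high end of the signaled range
        print(f"+{pct}% repricing -> projected 2026 spend: "
              f"${projected_hbm_spend(baseline, pct):,.0f}")
```

Running scenarios across the signaled range, rather than a single point estimate, gives procurement teams a defensible budget band for 2026 contract negotiations.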

As C.C. Wei emphasized: “The structural AI-related demand continues to be very strong.” 

The Bottom Line 

The next two years of AI hardware growth will be shaped by CoWoS bottlenecks, HBM3E scarcity, and 3nm/2nm wafer constraints. Every major supplier is signaling deep, structural tightness, not temporary volatility, across the most critical enablers of AI infrastructure. 

Procurement teams that plan for these constraints today will be the ones prepared for the supply environment of 2026–2027. 

Want monthly updates on HBM supply, advanced packaging constraints, 3nm availability, and emerging component shortages?
Sign up for Fusion Worldwide’s Greensheet, our industry insights briefing designed for procurement, sourcing, and supply chain leaders navigating fast-moving market conditions.

 

Frequently Asked Questions

1. Why is CoWoS capacity the main bottleneck for AI chips right now?

CoWoS (Chip-on-Wafer-on-Substrate) is the critical packaging process that enables HBM to sit next to GPUs and AI accelerators. Even if wafer supply increases, chips cannot be assembled without CoWoS capacity. TSMC, NVIDIA, and multiple OSATs reported that CoWoS is oversubscribed through at least 2026, making it the single tightest part of the AI semiconductor stack.

2. How long will HBM shortages last?

Based on Q3 earnings calls from SK Hynix, Micron, and Samsung, HBM supply is fully allocated through 2026, including HBM3E. Both demand growth and manufacturing complexity limit how quickly suppliers can expand output. Early signals suggest tightness could extend into 2027, especially as hyperscalers and GPU vendors secure long-term contracts.

3. What is driving the surge in demand for HBM3E and high-bandwidth memory?

AI infrastructure requires higher memory bandwidth and lower latency than traditional DRAM can provide. New architectures such as NVIDIA Blackwell and AMD MI355 depend on HBM3E stacking. As AI workloads continue to scale, HBM becomes the performance limiter — and thus the hardest-to-source component.

4. Why is advanced-node capacity (3nm / 2nm) failing to meet demand?

TSMC reported that advanced-node wafer demand is “about three times” greater than its available supply, driven by AI accelerators, networking ASICs, and power-efficient CPUs. Even with record capex, ramping new nodes takes years, not quarters. As a result, 3nm and early 2nm availability remains structurally constrained.

5. How will these shortages impact pricing for AI components?

Suppliers have already signaled upward pricing pressure. Samsung expects high-teens to low-twenties percent price increases for HBM in 2026 contracts. Limited CoWoS slots and oversubscribed wafer starts will also support elevated pricing for advanced-node logic and next-generation accelerators.

6. Are shortages concentrated only in AI products, or will other markets feel the impact?

AI is absorbing most of the incremental supply for HBM, CoWoS, and 3nm wafers — but the ripple effects extend much further. Suppliers warn of broader DRAM tightness, longer lead times for networking ASICs, and reduced availability for high-performance CPUs used in telecom, cloud, and storage systems.

7. Can alternative memory (e.g., GDDR6, DDR5) offset HBM constraints?

Not for AI training or high-end inference. HBM’s bandwidth and power efficiency are essential for accelerator performance, meaning GDDR6 and DDR5 cannot replace HBM for these workloads. In some mid-tier applications, however, customers may shift to GDDR-based designs as a stopgap.

8. Are new fabs or packaging facilities likely to relieve constraints?

Not in the near term. Even with aggressive investment from TSMC, Samsung, Micron, and OSATs, meaningful CoWoS and HBM expansion will take 12–24 months. Rumored long-term moves — such as Broadcom evaluating its own fab strategy — would not impact supply until 2027 or later.

9. What procurement strategies are most effective during an HBM and CoWoS shortage?

Teams are prioritizing:

  • Early allocation commitments with suppliers
  • Multi-supplier qualification for memory (Micron + SK Hynix + Samsung)
  • Buffer inventory for critical AI-related components
  • Flexibility across logic SKUs and power-management devices
  • Visibility into long-term build plans

Being proactive is essential; suppliers universally confirmed demand far exceeds near-term capacity.

10. How long will these structural constraints last?

Most suppliers expect tight conditions through 2026, with some signals pointing into 2027. Because the demand is structural—not cyclical—capacity expansion is unlikely to outpace AI adoption over the next two years.

 
