Dylan Patel — The single biggest bottleneck to scaling AI compute


Summary based on the YouTube transcript and episode description. Prompt input used 79,979 of 169,644 transcript characters.

Dylan Patel of SemiAnalysis argues ASML’s EUV machine production — not power or data centers — will be the binding constraint on AI compute scaling by 2028–2030.

  • ASML can produce ~70 EUV machines per year today, growing to ~100/year by 2030; at 3.5 tools per gigawatt, the global ceiling is ~200 gigawatts of AI chips by decade’s end.
  • A single gigawatt of Nvidia Rubin capacity requires ~55,000 3nm logic wafers, ~6,000 5nm wafers, and ~170,000 DRAM wafers — roughly 2 million EUV exposure passes in total.
  • H100 rental prices have risen to $2.40/hr on 2–3 year contracts, well above the ~$1.40/hr build cost, because demand for older Hopper capacity outpaces supply.
  • Anthropic’s conservatism on compute contracts has forced it toward lower-quality providers and revenue-share arrangements, while OpenAI locked in 5-year deals early.
  • Huawei arguably matches Nvidia in every capability — software, networking, AI talent, fabs — and if not banned from TSMC would likely be its largest customer today.
  • A Taiwan disruption would collapse incremental AI compute additions from hundreds of gigawatts/year to roughly 10–20 gigawatts across Intel and Samsung combined.
  • Robot AI workloads will remain largely cloud-centralized because on-device processing wastes scarce leading-edge chips that would otherwise go to data centers.
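The capacity arithmetic in the bullets above can be sketched in a few lines. This is a back-of-envelope illustration using the episode's rough figures (3.5 tools/GW, ~100 machines/year by 2030, ~200 GW ceiling, Rubin wafer counts, ~2M EUV passes per GW); the average-passes-per-wafer number is derived here, not stated in the episode.

```python
# Back-of-envelope arithmetic behind the EUV-bottleneck claim.
# All input figures are the episode's rough estimates, not official ASML/Nvidia data.

tools_per_gw = 3.5            # EUV tools needed per gigawatt of AI chips
tools_per_year_2030 = 100     # projected ASML output by 2030

# At the 2030 run rate, annual new AI-chip capacity is capped at:
gw_per_year = tools_per_year_2030 / tools_per_gw          # ~28.6 GW/year
print(f"~{gw_per_year:.1f} GW of new AI-chip capacity per year")

# The ~200 GW decade-end ceiling implies a cumulative tool base of roughly:
ceiling_gw = 200
tools_needed = ceiling_gw * tools_per_gw                  # ~700 tools
print(f"~{tools_needed:.0f} cumulative EUV tools for {ceiling_gw} GW")

# Per-gigawatt wafer demand for Nvidia Rubin, per the episode:
wafers_per_gw = {"3nm logic": 55_000, "5nm logic": 6_000, "DRAM": 170_000}
total_wafers = sum(wafers_per_gw.values())                # 231,000 wafers/GW
euv_passes_per_gw = 2_000_000                             # episode figure

# Derived (not stated in the episode): implied average EUV passes per wafer.
avg_passes = euv_passes_per_gw / total_wafers
print(f"{total_wafers:,} wafers/GW → ~{avg_passes:.1f} EUV passes per wafer")
```

The same three inputs (tool output, tools per gigawatt, and the per-gigawatt wafer bill) drive all of the headline numbers, which is why the argument treats ASML's production rate as the single binding constraint.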

2026-03-13 · Watch on YouTube