Ilya Sutskever – We're moving from the age of scaling to the age of research


Summary based on the YouTube transcript and episode description. Prompt input used 79979 of 89170 transcript characters.

Ilya Sutskever argues the 2020–2025 scaling era is ending and fundamental research on generalization — not more compute — is now the bottleneck to superintelligence.

  • Sutskever forecasts human-level continual learning AI in 5–20 years, after which recursive improvement becomes tractable.
  • Pre-training data is finite; companies now spend more compute on RL than pre-training, but RL environments are over-indexed on evals, causing real-world underperformance.
  • The core unsolved problem: models generalize dramatically worse than humans, even in domains like math and coding that have no evolutionary prior.
  • SSI’s differentiated bet is that cracking reliable generalization is the path to safe superintelligence — not incremental scaling.
  • Sutskever confirmed SSI was fundraising at a $32B valuation when Meta offered to acquire it; he declined, but his co-founder departed for Meta with liquidity.
  • Long-run equilibrium concern: personal AI agents could make humans non-participants in their own lives; his reluctant solution is a Neuralink-style human-AI merger.
  • Research taste, per Sutskever: seek beauty, simplicity, and brain-inspired correctness simultaneously — top-down conviction is what lets you keep debugging when experiments contradict your hypothesis.

2025-11-25 · Watch on YouTube