Why I don’t think AGI is right around the corner
Summary based on the YouTube transcript and episode description.
Dwarkesh Patel argues that continual learning, not raw capability, is the core bottleneck blocking transformative AI, and puts reliable computer use at 2028 and on-the-job learning at 2032.
- Anthropic researchers Sholto Douglas and Trenton Bricken predicted >25% of white-collar jobs automated in 5 years; Dwarkesh bets the figure stays under 25% if progress stopped today.
- Lack of continual learning is the central bottleneck: LLMs can't improve on the job the way a human employee does after 6 months in the role.
- Dwarkesh spent ~100 hours building LLM tools for his own post-production workflow; he rates them 5/10 even on simple, self-contained language tasks.
- Reliable end-to-end computer use (e.g., fully handling small-business taxes) is his 50/50 bet for 2028; he frames current computer-use models as GPT-2-era systems with no pretraining corpus to learn from.
- On-the-job learning that matches a human employee's first 6 months, for any white-collar role: a 50/50 bet for 2032.
- After 2030, training-compute scaling hits physical limits (chips, power, share of GDP), so further progress must come from algorithmic breakthroughs at a point when the low-hanging fruit has already been picked.
- Once continual learning is solved, AI copies could pool what each instance learns across every job simultaneously, yielding a broadly deployed intelligence explosion even without further algorithmic progress.
- R1/o1 arrived 2 years after GPT-4 despite resting on a conceptually simple RL idea, signaling that computer use (harder tasks, sparser rewards, a different modality) will take longer than labs project.
2025-08-01 · Watch on YouTube