AI 2027: month-by-month model of intelligence explosion — Scott Alexander & Daniel Kokotajlo
Watch on YouTube ↗ · Summary based on the YouTube transcript and episode description.
Scott Alexander and Daniel Kokotajlo present a month-by-month scenario in which an intelligence explosion compresses decades of AI progress into 2027–2028.
- Daniel’s 2021 post predicting AI through 2026 was largely correct; AI 2027 is the sequel, now going through the intelligence explosion.
- Their model posits a 5x algorithmic-R&D multiplier by early 2027, compressing what would otherwise be 50–70 years of progress into 2027–2028.
- The scenario depicts the leading AI lab deliberately staging dramatic demos to wake up the U.S. president and lobby for faster development and cuts to red tape.
- A senior AI researcher they interviewed saves 4–8 hrs/week in familiar domains but 24 hrs/week in unfamiliar ones — the productivity gain is larger where AI is less like autocomplete.
- Metaculus's AGI median was ~2050 in 2020 and has since dropped to ~2030; Scott argues aggregate forecasters have consistently been too pessimistic, not too optimistic.
- That LLMs aren't yet making scientific discoveries is explained by a lack of targeted scaffolding and the wrong training incentives, not a fundamental ceiling; they predict this gets overcome.
- The document’s branching point is mid-2027: either labs stay competitive or one pulls decisively ahead, triggering an arms-race dynamic with China that shapes whether democratic checks survive.
- On post-AGI redistribution, Scott warns the most likely political outcome is feudal job protection (like longshoremen) rather than UBI, even with a superintelligent oracle available to advise otherwise.
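The 50–70-year figure is easier to see with a toy back-of-the-envelope: if the R&D multiplier starts at 5x and keeps growing as AI improves its own research, the integrated progress over 24 calendar months reaches decades. A minimal sketch, where the 5x starting point comes from the episode but the six-month doubling time is purely an illustrative assumption, not a figure the guests give:

```python
def compressed_progress_years(start_mult=5.0, doubling_months=6.0, months=24):
    """Toy model: sum up 'years of normal-pace progress' achieved each
    calendar month when an R&D speed multiplier grows exponentially.

    start_mult      -- multiplier at the start (5x, per the episode)
    doubling_months -- assumed doubling time of the multiplier (illustrative)
    months          -- simulated span (Jan 2027 through Dec 2028)
    """
    total_years = 0.0
    for i in range(months):
        mult = start_mult * 2 ** (i / doubling_months)  # multiplier this month
        total_years += mult / 12.0  # one real month delivers mult/12 years
    return total_years

print(round(compressed_progress_years(), 1))  # ≈ 51.0 "years" of progress
```

With those assumed parameters, two calendar years yield roughly 51 years of normal-pace progress, inside the 50–70 range; a slightly faster doubling pushes it toward the top of that band.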
2025-04-03