Leopold Aschenbrenner — 2027 AGI, China/US super-intelligence race, & the return of history


Watch on YouTube ↗

Summary based on the YouTube transcript and episode description; the prompt used 79,979 of the 250,000 transcript characters.

Leopold Aschenbrenner argues that scaling plus "unhobbling" produces AGI by 2027, and that OpenAI fired him for raising CCP espionage concerns to the board.

  • Aschenbrenner says he was fired from OpenAI primarily for sharing an internal security memo with the board after a major security incident; he recounts that HR told him worrying about CCP espionage was racist.
  • He predicts AGI (drop-in remote worker level) by 2027–2028, driven by 10GW training clusters costing hundreds of billions.
  • Compute scaling trend: GPT-4-class cluster ~$500M and ~10 MW; 2026 cluster ~$10B and ~1 GW; 2028 cluster ~$100B+ and ~10 GW; by ~2030, a trillion-dollar, 100 GW cluster drawing over 20% of US electricity production.
  • The data wall is an underrated risk: Llama 3 was already trained on close to all usable internet data, and repeating data yields sharply diminishing returns past roughly 16 repetitions.
  • Test-time compute overhang is the key near-term unlock: getting GPT-4-class models to reason coherently over millions of tokens (versus hundreds today) could substitute for orders of magnitude of model-size gains.
  • Building AGI clusters outside the US (e.g. in the UAE) would be like doing the Manhattan Project in a foreign country; Aschenbrenner argues it is a serious national-security error.
  • Post-AGI intelligence-explosion math: roughly half an order of magnitude (OOM) of compute growth plus half an OOM of algorithmic progress per year compounds to ~10x effective compute annually, dwarfing normal economic growth.
  • The leak OpenAI cited was Aschenbrenner sharing a preparedness brainstorming doc with three external researchers; the "sensitive" line was a prediction of AGI by 2027–2028, something Sam Altman himself says publicly.
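The compounding claim in the intelligence-explosion bullet above is just exponent arithmetic; a minimal sketch, assuming the ~0.5 OOM/year figures quoted in the episode (the variable names here are illustrative, not from the source):

```python
# Figures quoted in the episode, per year:
# ~0.5 order of magnitude (OOM) from raw compute scale-up,
# ~0.5 OOM from algorithmic efficiency gains.
COMPUTE_OOM_PER_YEAR = 0.5
ALGO_OOM_PER_YEAR = 0.5

# OOMs add in the exponent, so the multipliers multiply:
# 10**0.5 * 10**0.5 == 10**1.0, i.e. ~10x effective compute per year.
annual_multiplier = 10 ** (COMPUTE_OOM_PER_YEAR + ALGO_OOM_PER_YEAR)
print(f"effective compute growth: ~{annual_multiplier:.0f}x per year")
```

Compounded over even a few years, this is a multi-OOM jump, which is the gap with ordinary economic growth rates that the episode's framing rests on.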

2024-06-04