Anthropic co-founder: AGI predictions, leaving OpenAI, what keeps him up at night | Ben Mann

· startups

Summary based on the YouTube transcript and episode description.

Anthropic co-founder Ben Mann argues AGI is likely by 2028, 20% unemployment is inevitable, and existential risk sits at 0–10%.

  • Mann puts his 50th-percentile estimate for the arrival of superintelligence at 2028, citing the AI 2027 report forecasters, who quietly shifted their own estimate to 2028.
  • Existential risk from AI: Mann’s personal estimate is 0–10%, but he calls it critical to address precisely because almost no one is working on it.
  • Meta’s $100M signing bonuses are real; Mann says Anthropic retains people because the best-case scenario at Meta is profit, while at Anthropic it is shaping humanity’s future.
  • Constitutional AI works by having the model critique and rewrite its own outputs against a set of natural-language principles drawn from the UN Universal Declaration of Human Rights and other sources; no human raters are required.
  • Claude’s personality and low sycophancy are a direct byproduct of alignment research, not a separate UX effort.
  • Claude Code team says 95% of their code is now written by Claude; Intercom’s Fin hits 82% autonomous customer-service resolution.
  • Model intelligence is bottlenecked first by compute (chips and power), then by algorithms and researchers. A 10x cost reduction per unit of intelligence has already happened; compounding at that rate for three years implies roughly 1,000x smarter models at current prices.
  • Mann teaches his kids curiosity, kindness, and creativity via Montessori; says traditional academic credentialing will not matter in an AI-abundant world.
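The critique-and-rewrite loop behind Constitutional AI, as described above, can be sketched in a few lines. This is a hedged illustration, not Anthropic's implementation: `generate` is a placeholder for any chat-model call, and the two principles are invented examples standing in for the real constitution.

```python
# Illustrative sketch of a Constitutional AI self-revision loop.
# `generate` is a stand-in for a real model call; the principles
# below are hypothetical examples, not Anthropic's actual constitution.

PRINCIPLES = [
    "Choose the response least likely to encourage harmful activity.",
    "Choose the response most consistent with basic human rights.",
]

def generate(prompt: str) -> str:
    # Placeholder for a real chat-completion call.
    return f"[model output for: {prompt[:40]}]"

def constitutional_revision(user_prompt: str) -> str:
    draft = generate(user_prompt)
    for principle in PRINCIPLES:
        # The model critiques its own draft against one principle...
        critique = generate(
            f"Critique this response against the principle "
            f"'{principle}':\n{draft}"
        )
        # ...then rewrites the draft to address its own critique.
        draft = generate(
            f"Rewrite the response to address the critique.\n"
            f"Critique: {critique}\nOriginal: {draft}"
        )
    # No human rater appears anywhere in the loop.
    return draft
```

The key design point the bullet highlights is that both the critique and the rewrite come from the model itself, so alignment signal scales with model capability rather than with a pool of human labelers.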

2025-07-20 · Watch on YouTube