2025 models will be more like coworkers than search engines – OpenAI cofounder John Schulman
Summary based on the YouTube transcript and episode description.
OpenAI cofounder John Schulman predicts 2025 models will act as long-horizon coding collaborators, not single-query tools.
- In one to two years, models will execute full coding projects (multiple files, testing, iteration) rather than suggesting single functions.
- The key unlock is long-horizon RL training, teaching models to carry out extended tasks rather than single steps.
- Current models are already close to human-level intelligence on a per-token basis; the deficit is coherence over time, not raw intelligence.
- Stronger models generalize better from sparse data: a few examples of error recovery may suffice, where weaker models would need large domain-specific datasets.
- Schulman does not expect long-horizon RL alone to unlock AGI — he flags other unspecified deficits beyond coherence.
- Fully functional AI colleagues are directionally plausible, though Schulman can’t name the exact remaining blocker.
- Training signal can come from supervising only the final output or each intermediate step; Schulman expects both approaches to help.
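The last point can be illustrated with a toy sketch (this is not OpenAI's method, just a hypothetical example): outcome supervision assigns one reward to a whole trajectory based on the final result, while process supervision scores each intermediate step on its own.

```python
# Toy contrast between outcome supervision (one reward shared by all
# steps, based only on the final result) and process supervision
# (each step judged individually). Names and the step-checker are
# illustrative assumptions, not part of the source.

def outcome_rewards(steps, final_ok):
    # Sparse signal: every step inherits credit from the final outcome.
    return [1.0 if final_ok else 0.0 for _ in steps]

def process_rewards(steps, step_ok):
    # Dense signal: each step gets its own reward from a step checker.
    return [1.0 if step_ok(step) else 0.0 for step in steps]

trajectory = ["write test", "write function", "run tests"]
print(outcome_rewards(trajectory, final_ok=True))   # [1.0, 1.0, 1.0]
print(process_rewards(trajectory, step_ok=lambda s: "test" in s))
```

Outcome supervision is cheaper to label but gives long-horizon tasks only a single sparse signal; process supervision gives denser feedback at the cost of judging every intermediate step.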
2024-05-14 · Watch on YouTube