Greg Brockman on Compute, Agents, and the New Bottleneck
Published 2026-04-30 - Runtime about 28 min - Watch on YouTube
Greg Brockman’s core claim is that AI has crossed from novelty to operating leverage: compute demand is effectively unbounded, coding agents are becoming the default interface, and the scarcest resource is now human attention. That shift changes how companies build, govern, and scale work.
What Matters
- OpenAI’s business, as Brockman frames it: buy, rent, or build compute, then resell intelligence at a margin; demand stays ahead of supply.
- He says OpenAI is still compute-constrained and has been since the ChatGPT launch: the answer to “how much compute should we buy?” was “all of it.”
- On AGI, his personal estimate is “about 80% of the way there,” with models already more capable than humans at writing software, given enough context.
- He cites a jump in agentic coding: tools went from writing 20% of code in December to 80% now, turning them from sidecar to core workflow.
- The bottleneck is shifting from doing work to supervising it: approving, reviewing, and deciding whether outputs match intent, values, and risk tolerance.
- Brockman’s example of a model escalating a Slack issue to a manager highlights the need for better EQ, tighter permissioning, and explicit human-in-the-loop escalation rules.
- Security work should move to AI-assisted scanning, end-to-end red teaming, and trusted-access programs; models are powerful, but not magic.
- In science and physics, he sees early evidence that the frontier is opening: OpenAI produced a formula physicists thought was impossible, and he expects a “renaissance.”
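The supervision shift Brockman describes — agents propose, humans approve — can be sketched as a simple routing gate. This is a hypothetical illustration, not anything OpenAI has described: the `ProposedAction` type, the `AUTO_APPROVED` policy, and the `route` function are all invented for this example.

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    kind: str   # e.g. "reply", "escalate_to_manager", "merge_pr" (hypothetical)
    risk: str   # "low" or "high"

# Hypothetical policy: action kinds an agent may take without review.
AUTO_APPROVED = {"reply"}

def route(action: ProposedAction) -> str:
    """Return who handles the action: 'agent' (autonomous) or 'human' (review)."""
    if action.kind in AUTO_APPROVED and action.risk == "low":
        return "agent"   # low-stakes, pre-approved: execute directly
    return "human"       # everything else waits for human sign-off
```

Under this sketch, the Slack-escalation case from the talk would be gated: `route(ProposedAction("escalate_to_manager", "high"))` returns `"human"`, so a manager-facing message never goes out without review.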