Karpathy on Software 3.0, Verifiability, and Agentic Engineering
Published 2026-04-29 - Runtime about 30 min - Watch on YouTube
Karpathy’s core claim is that LLMs are no longer just a coding-assistant layer: they are becoming the programmable substrate itself. That shifts software from writing explicit instructions to directing agentic systems, and it changes what teams build, how they hire, and which skills still matter.
What Matters
- December was the inflection point for Karpathy: the latest models stopped needing constant correction, and agentic workflows started to feel coherent.
- Software 3.0 means prompting and context become the control surface; the LLM is the interpreter, not just a tool inside old software.
- His menu example makes the point concrete: the “app” can collapse into one prompt to Gemini plus Nano Banana, with the model doing the OCR and the rendering.
- Verifiable domains move fastest because RL can train against mechanically checkable rewards; math, code, and adjacent tasks are where models peak first.
- He thinks almost everything is eventually automatable, but some work is only “automatable from a distance” until it becomes verifiable.
- Agentic engineering is the discipline of keeping professional quality bars while using spiky, stochastic agents that can speed up real software work.
- Human value shifts to taste, judgment, spec design, and oversight; even in agent-heavy workflows, he still wants unique user IDs and precise system design.
- “You can outsource your thinking but you can’t outsource your understanding” is his education thesis: intelligence may get cheap, but comprehension remains the bottleneck.
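The verifiability point in the list above can be made concrete with a toy reward function: in code, correctness can be checked mechanically against tests, with no human judge in the loop, which is what makes RL training tractable in these domains. This is an illustrative sketch only; the function and test format below are invented for this note, not anything Karpathy describes.

```python
# Toy sketch of a "verifiable reward": a candidate program earns reward 1.0
# only if it passes every test case. Names and format are hypothetical.

def reward_for_code(candidate_fn, test_cases):
    """Return 1.0 if candidate_fn passes all (args, expected) pairs, else 0.0."""
    for args, expected in test_cases:
        try:
            if candidate_fn(*args) != expected:
                return 0.0
        except Exception:
            # Crashes count as failure, same as a wrong answer.
            return 0.0
    return 1.0

# Grade a model-proposed implementation of absolute value.
proposed = lambda x: x if x >= 0 else -x
tests = [((3,), 3), ((-3,), 3), ((0,), 0)]
print(reward_for_code(proposed, tests))  # 1.0
```

A reward like this is cheap, objective, and scalable, which is why code and math lead; tasks whose quality only a human can judge lack this signal and lag behind.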