Naveen Rao on Why AI Needs a New Computing Substrate
Published 2026-05-06 - Runtime about 14 min - Watch on YouTube
Naveen Rao’s core claim is that the next AI leap will not come from squeezing more out of GPUs, but from changing the substrate itself. He argues intelligence is hitting an energy wall, and that brains show a vastly more efficient path built on nonlinear dynamics rather than matrix math.
What Matters
- Rao says the world will hit an AI power wall in 2-4 years, not 10, because inference and training are already consuming many gigawatts.
- His comparison is blunt: all of humanity's brains together draw only about 160 gigawatts, roughly 20 watts each.
- He says today’s GPU stack is about three orders of magnitude away from the thermodynamic limit of intelligence per watt.
- The target is not better matrix math, but physics-native computation: nonlinear dynamics, time-domain processing, and no von Neumann back-and-forth to memory.
- He treats biology as an existence proof: a macaque brain is under 1 watt, and a squirrel can coordinate complex motion on under 10 milliwatts.
- The chip strategy is already moving fast: his team went from no team in January to a full prototype in 6 months, which he credits to AI.
- In his model, compute and state coincide in the physics itself: the system is configured once, kicked with an input, and allowed to evolve, rather than being stepped instruction by instruction.
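The 160-gigawatt figure above is simple arithmetic; a quick sanity check (my own numbers, assuming roughly 8 billion people, which matches the talk's round figures):

```python
# Sanity check on Rao's energy comparison (assumptions: ~8 billion
# human brains at ~20 W each -- round numbers, not from the talk's transcript).
brains = 8e9
watts_per_brain = 20
total_gw = brains * watts_per_brain / 1e9  # convert watts to gigawatts
print(total_gw)  # -> 160.0
```

For scale, a single large AI datacenter campus is now planned in the gigawatt range, which is why Rao frames the comparison in these units.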
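The "set once, kick, let it run" idea can be illustrated with a toy bistable dynamical system. This is my own sketch, not Rao's design: the dynamics dx/dt = x - x^3 have stable attractors at +1 and -1, so an initial "kick" decides which state the system relaxes into. Here the relaxation is emulated with Euler steps; the point is that a physical substrate would perform the settling for free, with no program counter and no memory round-trips.

```python
# Toy illustration (hypothetical sketch, not Rao's chip): a bistable
# nonlinear element. "Programming" = choosing the dynamics; "kick" =
# setting the initial state; "compute" = letting the state relax to
# an attractor. We emulate the continuous dynamics with Euler steps.

def settle(kick: float, dt: float = 0.01, steps: int = 2000) -> float:
    """Relax the state from an initial kick to a stable fixed point."""
    x = kick
    for _ in range(steps):
        x += dt * (x - x**3)  # nonlinear drift toward +1 or -1
    return x

# A small positive kick settles to +1, a negative one to -1:
print(round(settle(0.1)))   # -> 1
print(round(settle(-0.1)))  # -> -1
```

The state *is* the computation's result; nothing reads or writes an external memory during the run, which is the contrast with the von Neumann back-and-forth mentioned above.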