François Chollet: Why Scaling Alone Isn’t Enough for AGI
François Chollet argues that scaling LLMs alone won’t reach true AGI and explains why his lab, Ndea, is building a fundamentally different symbolic learning substrate.
- Chollet predicts AGI around 2030 to the early 2030s, roughly when ARC v6 or v7 would be released.
- Ndea replaces the parametric curves of neural nets with the shortest symbolic programs that fit the data, searching via ‘symbolic descent’ rather than gradient descent (sketched after this list).
- Chollet estimates Ndea has only a 10–15% chance of success but calls the bet worth making because no one else will attempt it.
- On ARC v1, base models scored under 10% even after a 50,000x scale-up in pretraining; only reasoning models such as OpenAI’s o1/o3 cracked the benchmark.
- ARC v2 was saturated (97%) in roughly two months by Confluence Labs (YC W26) using RL harnesses and verifiable-reward loops (see the harness sketch below).
- ARC v3 measures agentic intelligence: agents are dropped into novel mini-games with no instructions and scored on action efficiency relative to humans, who need roughly hundreds to thousands of actions (see the scoring sketch below).
- Chollet believes the final AGI codebase will be under 10,000 lines of code and could in theory have been built in the 1980s on the compute available at the time.
- Domains without verifiable rewards (essays, law) will see very slow or stalled LLM progress, while code and math will keep making rapid gains.
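
Ndea has not published how ‘symbolic descent’ actually works, so the following is only a minimal illustrative sketch. It assumes an MDL-style objective (fit the examples, prefer shorter programs) and descends through discrete program edits rather than gradient steps; the toy DSL, edit moves, and loss weighting are all invented for illustration.

```python
# Illustrative only: Ndea's 'symbolic descent' is unpublished. This sketch
# searches for the shortest program fitting the data by making discrete
# edits that reduce a loss, instead of gradient steps on parameters.
import random

# Toy DSL: a program is a list of primitive ops applied left to right.
OPS = {
    "inc":    lambda x: x + 1,
    "dec":    lambda x: x - 1,
    "double": lambda x: x * 2,
    "square": lambda x: x * x,
}

def run(program, x):
    for op in program:
        x = OPS[op](x)
    return x

def loss(program, examples):
    # Distance to the target outputs dominates; program length breaks ties,
    # giving a minimum-description-length bias toward the shortest fit.
    misfit = sum(abs(run(program, x) - y) for x, y in examples)
    return misfit * 100 + len(program)

def symbolic_descent(examples, steps=5000, max_len=5):
    program, best = [], loss([], examples)
    for _ in range(steps):
        # Propose one discrete edit: insert, delete, or substitute an op.
        candidate = list(program)
        move = random.choice(["insert", "delete", "substitute"])
        if move == "insert" and len(candidate) < max_len:
            candidate.insert(random.randrange(len(candidate) + 1),
                             random.choice(list(OPS)))
        elif move == "delete" and candidate:
            candidate.pop(random.randrange(len(candidate)))
        elif move == "substitute" and candidate:
            candidate[random.randrange(len(candidate))] = random.choice(list(OPS))
        # Greedy acceptance: keep an edit only if the objective improves.
        if loss(candidate, examples) < best:
            program, best = candidate, loss(candidate, examples)
    return program

if __name__ == "__main__":
    # Target f(x) = (x + 1) * 2, shown only through input/output examples.
    examples = [(0, 2), (1, 4), (3, 8)]
    print(symbolic_descent(examples))  # e.g. ['inc', 'double']
```

Greedy acceptance can stall in local optima (e.g. a longer zero-misfit program); a real system would need smarter search, but the objective shape is the point here.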
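The talk doesn’t detail Confluence Labs’ setup; this is a generic sketch of a verifiable-reward loop, assuming ARC-style tasks whose outputs can be checked exactly. `DummyModel`, the task format, and the harness interface are stand-ins, not the company’s actual code.

```python
# Generic verifiable-reward loop: a programmatic checker turns every
# attempt into an unambiguous 0/1 training signal, which is what makes
# RL harnesses work so well on exactly-checkable benchmarks.
import random
from typing import List, Tuple

def verifiable_reward(candidate, target) -> float:
    # ARC-style outputs are grids: the answer either matches or it doesn't.
    return 1.0 if candidate == target else 0.0

class DummyModel:
    """Stand-in policy: guesses a random 1x1 grid. A real harness would
    wrap an LLM and update it on the collected rewards."""
    def sample(self, task_input):
        return [[random.randint(0, 9)]]

def rl_harness(model, tasks: List[Tuple[object, object]], samples_per_task=8):
    experience = []
    for task_input, hidden_answer in tasks:
        for _ in range(samples_per_task):
            attempt = model.sample(task_input)
            reward = verifiable_reward(attempt, hidden_answer)
            experience.append((task_input, attempt, reward))
    return experience  # training data for the next policy update

if __name__ == "__main__":
    tasks = [([[1]], [[2]])]  # (input grid, hidden answer grid)
    batch = rl_harness(DummyModel(), tasks)
    print(sum(r for _, _, r in batch), "of", len(batch), "attempts verified")
```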
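ARC v3’s exact scoring rule isn’t given in the summary; one plausible reading of “scored on action efficiency matching humans” is a normalized ratio like the one below, where the cap at 1.0 is an assumption.

```python
# Assumed ARC-v3-style scoring: matching human action-efficiency earns 1.0.
def efficiency_score(agent_actions: int, human_actions: int) -> float:
    if agent_actions <= 0:
        return 0.0  # treat "never solved the mini-game" as zero credit
    # Humans need roughly hundreds to thousands of actions per game;
    # beating the human baseline is capped rather than rewarded further.
    return min(1.0, human_actions / agent_actions)

print(efficiency_score(agent_actions=2400, human_actions=800))  # ~0.33
```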
2026-03-27 · Watch on YouTube