If an LLM solves this then we'll probably have AGI – Francois Chollet
Watch on YouTube ↗

Summary based on the YouTube transcript and episode description.
Francois Chollet argues that LLMs operate via memorization, and that reaching 80% (human level) on the ARC benchmark would likely signal AGI.
- Chollet: if an LLM hits 80% on the ARC benchmark with no anticipatory training, that likely signals AGI.
- ARC has resisted memorization for more than four years since its release, passing a meaningful test of time.
- Chollet acknowledges ARC can be cheated at scale by brute-forcing hundreds of millions of programmatically generated task variations.
- He frames intelligence as a pathfinding algorithm in the space of future situations, requiring genuine adaptation, not just memory.
- Key claim: LLMs do memorization, not reasoning — but so do most humans, who memorize skills and algorithms rather than deriving them.
- Chollet concedes that automating nearly all remote-work jobs and generating trillions in economic value could happen within the memorization regime alone.
- The fundamental limit of memorization is its reliance on static distributions: it breaks down when the world changes unpredictably.
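Chollet's pathfinding framing can be made concrete with a minimal breadth-first search sketch. This is an illustrative analogy only, not anything from the talk: states, the toy successor function, and the goal test are all invented here to show what "searching the space of future situations" means mechanically.

```python
from collections import deque

def find_path(start, goal, successors):
    """Breadth-first search over a state space: the analogy is that an
    intelligent agent searches possible future situations for a route
    to a goal, rather than replaying a memorized answer."""
    frontier = deque([[start]])   # queue of paths, each ending in a state
    seen = {start}                # avoid revisiting situations
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:
            return path
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None  # goal unreachable from start

# Toy "future-situation space": integers 0..9, moves are +1 or *2.
succ = lambda s: [x for x in (s + 1, s * 2) if x <= 9]
print(find_path(1, 9, succ))  # → [1, 2, 4, 8, 9]
```

The point of the analogy: the search procedure adapts to whatever successor function it is given, whereas a pure memorization system only covers situations it has seen before.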
2024-06-13