CACM article explains why today’s large context windows underperform and what hardware/algorithmic shifts are needed to reach billion-token windows by 2030.
Key Takeaways
KV cache growth saturates memory bandwidth before compute becomes the limit, forcing eviction and tiered paging that silently degrade attention quality across long contexts (a rough footprint calculation follows this list).
Nvidia’s Rubin CPX pairs GDDR7 with disaggregated prefill/generation paths to keep the KV cache resident, treating memory movement rather than peak FLOPs as the bottleneck.
Experts warn that billion-token windows won’t mean flat attention over every token; expect hierarchical attention, retrieval, and compression layered together (see the block-retrieval sketch after this list).
Algorithmic alternatives such as state-space models (SSMs), Test-Time Training, and Recursive Language Models may be required alongside new hardware to reach practical scale (a minimal SSM recurrence appears after this list).
Energy, storage, and exabyte-scale KV orchestration remain unsolved economic constraints even if hardware targets are met by 2030.
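To make the bandwidth and storage pressure concrete, here is a back-of-the-envelope sketch of KV-cache size versus context length. The model shape (80 layers, 8 grouped-query KV heads, head dimension 128, fp16) is an assumed, roughly 70B-class configuration, not a figure from the article.

    # Rough KV-cache footprint for a decoder-only transformer.
    # Assumed ~70B-class shape with grouped-query attention; these numbers
    # are illustrative and do not come from the CACM article.
    LAYERS = 80       # decoder layers
    KV_HEADS = 8      # grouped-query KV heads
    HEAD_DIM = 128    # per-head dimension
    BYTES = 2         # fp16/bf16 element size

    def kv_bytes_per_token() -> int:
        # Each layer caches one K and one V vector per KV head.
        return 2 * LAYERS * KV_HEADS * HEAD_DIM * BYTES

    for ctx in (128_000, 1_000_000, 1_000_000_000):
        gib = kv_bytes_per_token() * ctx / 2**30
        print(f"{ctx:>13,} tokens -> {gib:,.0f} GiB of KV cache")

At roughly 0.3 MiB per token under these assumptions, a 1M-token window already needs about 300 GiB of KV cache and a billion-token window about 300 TiB per sequence, which is why eviction, tiered paging, and fleet-wide orchestration toward exabytes enter the picture at all.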
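One concrete reading of "hierarchical attention, retrieval, and compression layered together" is block-sparse retrieval over the KV cache: score coarse block summaries first, then run ordinary attention only inside the best-matching blocks. The single-query NumPy sketch below is an illustration under that assumption; the block size, mean-key summaries, and top-k selection are choices made here, not the article's design.

    import numpy as np

    def retrieval_attention(q, K, V, block=64, top_k=4):
        """Toy hierarchical attention: retrieve top-k KV blocks, then attend.

        q: (d,) query; K, V: (n, d) cached keys and values.
        Illustrative only -- not a production kernel.
        """
        n, d = K.shape
        n_blocks = (n + block - 1) // block
        # Coarse level: one summary vector (mean key) per block.
        summaries = np.stack([K[i * block:(i + 1) * block].mean(axis=0)
                              for i in range(n_blocks)])
        picked = np.argsort(summaries @ q)[-top_k:]
        idx = np.concatenate([np.arange(i * block, min((i + 1) * block, n))
                              for i in picked])
        # Fine level: softmax attention restricted to the retrieved tokens.
        scores = (K[idx] @ q) / np.sqrt(d)
        w = np.exp(scores - scores.max())
        w /= w.sum()
        return w @ V[idx]

    rng = np.random.default_rng(0)
    K = rng.standard_normal((4096, 64))
    V = rng.standard_normal((4096, 64))
    print(retrieval_attention(rng.standard_normal(64), K, V).shape)  # (64,)

Per-query cost scales with top_k * block rather than the full context length n, which is the point of layering retrieval under attention.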
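For contrast with the growing KV cache, here is a deliberately naive linear state-space recurrence: the state stays a fixed size no matter how many tokens have been consumed, which is the property that makes SSMs and related recurrent approaches attractive at extreme context lengths. Dimensions and matrices are illustrative assumptions.

    import numpy as np

    def ssm_scan(xs, A, B, C):
        """Naive linear SSM: h_t = A @ h_{t-1} + B @ x_t, y_t = C @ h_t.

        Memory stays O(state_dim) regardless of sequence length, unlike a
        KV cache that grows linearly with every token processed.
        """
        h = np.zeros(A.shape[0])
        ys = []
        for x in xs:              # one recurrence step per token
            h = A @ h + B @ x
            ys.append(C @ h)
        return np.stack(ys)

    rng = np.random.default_rng(0)
    d_in, d_state, d_out = 16, 32, 16
    A = 0.9 * np.eye(d_state)                       # stable toy dynamics
    B = 0.1 * rng.standard_normal((d_state, d_in))
    C = 0.1 * rng.standard_normal((d_out, d_state))
    print(ssm_scan(rng.standard_normal((1000, d_in)), A, B, C).shape)  # (1000, 16)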
Hacker News Comment Review
Thin discussion, but commenters question whether raw context size is desirable at all, noting that unfiltered long context can hurt rather than help model performance.
Notable Comments
@withinboredom: large codebases are a credible use case, potentially reducing repeated boilerplate generation across a repo.