LLM research on Hacker News is drying up

· ai ai-agents web

TLDR

  • Dylan Castillo used Claude and BigQuery to confirm that the share of arXiv papers among HN stories has dropped sharply in recent months, after peaking during the LLM wave.
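
The core measurement is simple to sketch. A minimal, self-contained approximation (not Castillo's actual pipeline; it assumes HN story records as `(unix_timestamp, url)` pairs, e.g. exported from BigQuery's public `bigquery-public-data.hacker_news.full` table) that computes the monthly share of stories linking to arxiv.org:

```python
from collections import defaultdict
from datetime import datetime, timezone
from urllib.parse import urlparse

def monthly_arxiv_share(stories):
    """stories: iterable of (unix_timestamp, url) pairs for HN stories.
    Returns {(year, month): fraction of stories whose host is arxiv.org}."""
    totals = defaultdict(int)  # stories per month
    arxiv = defaultdict(int)   # arXiv-linking stories per month
    for ts, url in stories:
        if not url:  # Ask HN / Show HN text posts have no URL
            continue
        dt = datetime.fromtimestamp(ts, tz=timezone.utc)
        key = (dt.year, dt.month)
        totals[key] += 1
        host = urlparse(url).netloc.lower()
        if host == "arxiv.org" or host.endswith(".arxiv.org"):
            arxiv[key] += 1
    return {k: arxiv[k] / totals[k] for k in totals}

# Example with two January 2023 stories: one arXiv link, one not.
share = monthly_arxiv_share([
    (1672531200, "https://arxiv.org/abs/2301.00001"),
    (1672617600, "https://example.com/post"),
])
print(share[(2023, 1)])  # → 0.5
```

The real analysis would additionally filter by story score (e.g. front-page threshold) and normalize abs/pdf URL variants, but the month-bucketed share ratio above is the quantity whose decline the article reports.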

Key Takeaways

  • The 2019 HN arXiv peak was driven by deep learning: 41% of the top 100 upvoted papers were DL-focused, led by MuZero, EfficientNet, and XLNet.
  • The 2023-2026 window was even more concentrated: 59% of the top 100 upvoted arXiv papers were LLM- or AI-focused.
  • DeepSeek-R1 (1,351 pts) and BitNet b1.58 (1,040 pts) are flagged as likely durable papers: DeepSeek-R1 for open RL-based reasoning, BitNet for low-bit inference.
  • The LK-99 room-temperature superconductor cluster (2,408 + 1,690 pts) is called a landmark in meta-science rather than physics: open science at wire speed and crowdsourced replication.
  • The Generative Agents “Smallville” paper (391 pts) and Differential Transformer (562 pts) round out the predicted durable set, for agent design patterns and architecture respectively.

Hacker News Comment Review

  • No substantive HN discussion yet.
