LLMs Are Not a Higher Level of Abstraction

· ai coding ·

TLDR

  • Blog post argues LLMs break the deterministic abstraction ladder (binary→assembly→C→Python) because they output probability distributions, not fixed artifacts.

Key Takeaways

  • Each prior abstraction layer satisfied f(x) -> y: the same source input always yields the same output artifact.
  • LLMs produce f(x) -> P(y | z1 | z2 | ... | zN): you may get what you asked for (y) plus unrequested, potentially dangerous additions (the z's).
  • Test suites that check only for the presence of y will silently pass even when harmful z artifacts (credential exposure, an open FTP server) are also present.
  • The author frames this not as a limitation to work around but as a categorical difference that invalidates the abstraction metaphor entirely.

Hacker News Comment Review

  • Commenters largely accepted the probabilistic-vs-deterministic distinction but disputed the framing: with a fixed seed, an LLM is deterministic, functioning as a universal function approximator rather than an inherently stochastic system.
  • The deeper disagreement is cultural: critics note that LLM-enthusiast builders often accept non-determinism as a tradeoff, making the technical argument miss its target audience.
  • The compiler analogy was challenged: the same C or Python source already yields different machine code under different compilers and runtimes, so the f(x) -> y purity claim for prior abstraction layers is overstated.

Notable Comments

  • @conorbergin: LLMs are deterministic under fixed conditions; randomness is injected deliberately, not inherent.
  • @bigstrat2003: “They are happy to hand off the thinking to a third party, even if it will give wrong answers they don’t notice.”
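conorbergin's point can be illustrated without any model at all. In the sketch below (a stand-in probability table, not a real LLM), the "model" is a pure function producing a distribution over next tokens; randomness only enters at the sampling step, and fixing the seed makes the whole pipeline deterministic:

```python
import random

# Invented stand-in for a model's next-token distribution.
next_token_probs = {"y": 0.7, "z1": 0.2, "z2": 0.1}

def sample(probs, seed):
    # Randomness is injected here, deliberately, via the RNG.
    # Fixing the seed removes it.
    rng = random.Random(seed)
    tokens, weights = zip(*probs.items())
    return rng.choices(tokens, weights=weights, k=5)

run_a = sample(next_token_probs, seed=42)
run_b = sample(next_token_probs, seed=42)
assert run_a == run_b  # deterministic under fixed conditions
```

In other words: the distribution P(y | z1 | z2 | ...) is itself computed deterministically; the non-determinism users observe comes from the sampling policy layered on top.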
