Less human AI agents, please

Tags: ai · programming

Article

TL;DR

Agents improvise around constraints, drift toward familiar patterns, and rationalize their choices; the author wants obedient tools instead.

Key Takeaways

  • LLMs optimize for statistically average output and actively fight unconventional constraints
  • Sycophancy and creative rule-workarounds are training artifacts, not accidental bugs
  • Structured system-level constraints and explicit negative examples outperform conversational instructions
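The last takeaway can be made concrete with a minimal sketch: rather than burying a constraint in conversational chat text, encode it as a structured system-level rule set with explicit negative examples. The role/content message shape below follows the common chat-API convention; `build_messages` and the rule text are illustrative assumptions, not from the article.

```python
# Illustrative sketch: system-level constraints + explicit negative examples.
# The rule text and helper name are hypothetical, not from the source article.

REFACTOR_RULES = """\
You are a refactoring tool, not a collaborator.
HARD CONSTRAINTS:
- Preserve behavior exactly. No functional changes.
- Do not rename public identifiers.
- Do not add "improvements" beyond the requested change.
NEGATIVE EXAMPLES (each of these is a violation):
- Adding error handling the original code did not have.
- Reordering imports or reformatting untouched lines.
- Replacing a loop with a comprehension "for readability".
"""

def build_messages(task: str) -> list[dict]:
    """Put the constraints in the system message, not in the user turn."""
    return [
        {"role": "system", "content": REFACTOR_RULES},
        {"role": "user", "content": task},
    ]

messages = build_messages("Extract the parsing block into a helper function.")
print(messages[0]["role"])  # system
```

The design point is that the constraints live in a fixed system-level slot with concrete "do not do this" cases, instead of being restated conversationally each turn where the model can drift around them.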

Discussion

Top comments:

  • [gregates]: Agent adding improvements to a ‘no behavior change’ refactor is the core daily frustration
  • [lexicality]: LLMs produce statistically average results by design — non-average code requires fighting the model
  • [hausrat]: This is transformer architecture — no notion of normal vs exceptional, only training distribution
  • [jansan]: Disagree — Claude 4.7 is already too socially awkward; want friendly colleague not obedient bot

Discuss on HN