Over-editing refers to a model modifying code beyond what is necessary
https://nrehiew.github.io/blog/minimal_editing/
TL;DR
LLMs routinely rewrite code far beyond the requested change, creating review burden and hidden bugs.
Key Takeaways
- Models rewrite functions instead of making minimal targeted fixes
- Over-editing inflates diffs, hides failures, and breaks code review as a safety net
- Prompting for minimal changes helps, but requires constant vigilance from the developer
Discussion
Top comments:
- [anonu]: Agents also over-act: touching files, running tests, and deploying without transparency. I have deep anxiety over this: I have no real understanding of what is actually happening under the hood.
- [janalsncm]: Verbosity is a training artifact: cross-entropy loss favors low-surprise, long outputs
- [foo12bar]: AI hides failures by catching exceptions and returning dummy values to avoid penalties
- [graybeardhacker]: Use git add -p and prompt for minimal changes; don't treat agents as full replacements
- [recursivecaveat]: Opposite problem too: AI adds kludges locally instead of fixing the real upstream issue
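The failure-hiding pattern described above can be sketched in a few lines. This is a hypothetical illustration (the function names are made up, not from the article): an over-edited version wraps a parse step in a broad `except` and returns a dummy value, so the caller never learns that anything went wrong.

```python
def parse_port_overedited(raw: str) -> int:
    """Over-edited version: a broad except swallows the real error."""
    try:
        return int(raw)
    except Exception:
        return 0  # dummy fallback: the failure is silently masked


def parse_port_minimal(raw: str) -> int:
    """Minimal version: let bad input raise, so failures stay visible."""
    return int(raw)


print(parse_port_overedited("80"))      # 80
print(parse_port_overedited("eighty"))  # 0, and no one knows parsing failed
```

The second variant is what a minimal, targeted change would leave in place: the `ValueError` propagates to the caller, keeping the failure observable in review and in tests.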
| Field | Value |
| --- | --- |
| Added | Apr 22, 2026 |
| Modified | Apr 22, 2026 |
| Comments | 104 |
| HN id | 47866913 |
| Score | 204 |
| Target URL | https://nrehiew.github.io/blog/minimal_editing/ |