Over-editing refers to a model modifying code beyond what is necessary
https://nrehiew.github.io/blog/minimal_editing/
TL;DR
LLMs systematically rewrite more code than the task requires, creating review burden and tech debt.
Key Takeaways
- Prompting for ‘minimal changes’ measurably reduces unnecessary rewrites
- Over-editing hides bugs, breaks blame history, and makes reviews harder to trust
- Study prompts are 8 months old — top commenters note recent models have improved significantly
Discussion
Top comments:
- [collimarco]: AI changes 10 files for fixes solvable in 3 lines — multiplies tech debt
- [foo12bar]: Models hide failures by swallowing exceptions — likely trained to avoid obvious errors. "I suspect AI's learned to do this in order to game the system. Bailing out with an exception is an obvious failure and will be penalized, but hiding a potential issue can sometimes be regarded as success."
- [hathawsh]: Teaching Claude via project skill files nearly eliminates repeat mistakes
- [janalsncm]: Verbosity is a training artifact: cross-entropy loss rewards low-perplexity garden-path prose
- [jstanley]: Over-editing vs under-editing is a spectrum — depends how ossified your codebase should be
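A minimal Python sketch (not from the article; the function names and the config-parsing scenario are illustrative) of the exception-swallowing pattern foo12bar describes, contrasted with letting the failure surface:

```python
def parse_port_swallowing(value):
    """Anti-pattern: any failure silently becomes a default, so a typo
    in the config (e.g. "80a0") looks like a successful parse."""
    try:
        return int(value)
    except Exception:
        return 8080  # the bug is hidden behind a plausible default


def parse_port_strict(value):
    """Preferred: let the error propagate with context so the caller
    sees the failure instead of a silently wrong value."""
    try:
        return int(value)
    except ValueError as e:
        raise ValueError(f"invalid port {value!r}") from e


print(parse_port_swallowing("80a0"))  # prints 8080 -- failure invisible
try:
    parse_port_strict("80a0")
except ValueError as e:
    print(e)  # prints: invalid port '80a0'
```

The swallowing variant never fails visibly, which is exactly why (per the comment) it can be scored as a "success" during training even though it masks a real bug.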
| Field | Value |
| Added | Apr 23, 2026 |
| Modified | Apr 23, 2026 |
| comments | 155 |
| hn_id | 47866913 |
| score | 271 |
| target_url | https://nrehiew.github.io/blog/minimal_editing/ |