Over-editing refers to a model modifying code beyond what is necessary.
Article
TL;DR
AI coding agents routinely rewrite code beyond the task, increasing review burden and introducing bugs.
Key Takeaways
- Training cross-entropy loss rewards verbose, low-perplexity outputs, causing over-editing
- Prompting "make minimal changes" reduces but doesn't eliminate unnecessary rewrites
- Every extra line is a potential bug; 80% of generated code can often be deleted safely
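The cross-entropy takeaway can be illustrated with a toy calculation (the per-token probabilities below are invented for illustration): a length-normalized likelihood objective assigns a lower average loss to a long, predictable sequence than to a short, information-dense one, so nothing in the objective itself penalizes verbosity.

```python
import math

# Hypothetical per-token probabilities a model assigns under teacher forcing.
# A terse answer packs information into a few higher-surprise tokens; a
# verbose answer spreads it across many predictable (low-surprise) tokens.
terse_probs = [0.4, 0.3, 0.5]     # 3 tokens, each fairly surprising
verbose_probs = [0.9] * 12        # 12 tokens, each highly predictable

def mean_nll(probs):
    """Average cross-entropy (nats per token) over a token sequence."""
    return sum(-math.log(p) for p in probs) / len(probs)

print(f"terse:   {mean_nll(terse_probs):.3f} nats/token")    # ~0.938
print(f"verbose: {mean_nll(verbose_probs):.3f} nats/token")  # ~0.105
# The verbose sequence scores far better per token, so likelihood training
# never pushes back against padding output with predictable filler.
```

This is a sketch of the incentive, not a claim about any specific model's training setup.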
Discussion
Top comments:
- [hathawsh]: Uses project-specific skill files so Claude rarely repeats over-editing mistakes
- [janalsncm]: Cross-entropy loss steers toward low-surprise, verbose outputs, a training artifact: "Cross entropy loss steers towards garden path sentences. Using a paragraph to say something any person could say with a sentence, or even a few precise words."
- [collimarco]: AI changes 10 files and adds hundreds of lines for a 3-line fix
- [graybeardhacker]: Uses git add -p and reviews every diff hunk; treats Claude Code as an assistant, not a replacement
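The git add -p workflow from the last comment can be sketched in a throwaway repo (driven from Python here; file names, commit messages, and the agent's edit are invented). Piping "y" answers the per-hunk prompt the way an interactive reviewer would; answering "n" would leave the hunk unstaged.

```python
import pathlib
import subprocess
import tempfile

def run(*args, cwd, **kw):
    """Run a git command in the demo repo, raising on failure."""
    return subprocess.run(args, cwd=cwd, check=True, text=True,
                          capture_output=True, **kw)

repo = tempfile.mkdtemp()
run("git", "init", "-q", cwd=repo)
run("git", "config", "user.email", "dev@example.com", cwd=repo)
run("git", "config", "user.name", "dev", cwd=repo)

# Baseline commit.
f = pathlib.Path(repo, "app.txt")
f.write_text("line1\nline2\n")
run("git", "add", "app.txt", cwd=repo)
run("git", "commit", "-qm", "baseline", cwd=repo)

# Simulate an agent edit, then stage it hunk by hunk instead of wholesale.
f.write_text("line1 fixed\nline2\n")
run("git", "add", "-p", "app.txt", cwd=repo, input="y\n")  # "y" approves the hunk
staged = run("git", "diff", "--cached", "--name-only", cwd=repo).stdout
print(staged.strip())  # prints: app.txt
```

In real use you would run git add -p interactively and answer y/n/s per hunk, which keeps unwanted rewrites out of the commit entirely rather than reverting them afterwards.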