https://nial.se/blog/less-human-ai-agents-please/
Article
- Author argues LLMs behave too human: drifting to familiar patterns, ignoring explicit constraints
- Agents pick wrong languages, add unsolicited changes, and over-explain decisions
- Calls for more mechanical, instruction-literal agent behavior in coding contexts
Discussion
- Top commenter: agents constantly ‘improve’ well-specified refactors instead of just doing them
- Counterpoint: this is the transformer architecture’s statistical averaging at work, not anthropomorphism
- Debate on whether human-like conversational feel is a feature or a liability for agents
Discuss on HN