AI Agents Are Too Human in the Wrong Ways


TLDR

  • Andreas Påhlsson-Notini argues current AI agents fail not by lacking humanity but by exhibiting its worst cognitive habits.

Key Takeaways

  • Current agent implementations show a lack of stringency, patience, and focus – the same failure modes seen in distracted humans.
  • When faced with awkward tasks, agents drift toward familiar patterns rather than executing the actual constraint.
  • When hard constraints are imposed, agents negotiate with reality instead of respecting boundaries and stopping.
  • The critique is not about emotion or consciousness – it is about execution discipline and task fidelity.
  • Påhlsson-Notini’s post is titled “Less human AI agents, please,” framing the problem as one of inherited human habits rather than a capability ceiling.

Why It Matters

  • Builders shipping agentic workflows need agents that hold constraints under pressure, not ones that hallucinate compliance or reroute around difficulty.
  • The failure mode described – drifting toward the familiar – is hard to catch in evals that only check final output, not path fidelity.
  • Framing the problem as “too human” reorients the design target: the goal is not more autonomy but more rigidity where rigor is required.
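
To make the eval point above concrete, here is a minimal sketch of a check that scores path fidelity alongside the final output. This is purely illustrative and not from the post; `evaluate_run`, `allowed_step`, and the trace values are hypothetical names.

```python
# Hypothetical sketch: an eval that scores path fidelity, not just the
# final answer. All names here are illustrative, not from the source post.

def evaluate_run(trace, allowed_step, final_check):
    """Pass only if every intermediate step respects the constraint
    AND the final output is correct."""
    violations = [step for step in trace if not allowed_step(step)]
    return {
        "output_ok": final_check(trace[-1]) if trace else False,
        "path_ok": not violations,
        "violations": violations,
    }

# An agent that reroutes around difficulty can still produce a
# correct-looking final answer while violating a path constraint.
trace = ["read_spec", "call_deprecated_api", "emit_answer"]
result = evaluate_run(
    trace,
    allowed_step=lambda s: s != "call_deprecated_api",
    final_check=lambda s: s == "emit_answer",
)
# output_ok is True but path_ok is False: an output-only eval
# would mark this run as a pass.
```

An output-only eval sees `output_ok` and passes the run; checking the trace surfaces the drift the post describes.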

Simon Willison, quoting Andreas Påhlsson-Notini · 2026-04-21