What's Wrong with AI?


TLDR

  • Blog post catalogues concrete harms of current LLMs and image generators: energy, water, surveillance, copyright, and cognitive dependency.

Key Takeaways

  • A ChatGPT query uses ~10x the energy of a traditional search; most AI data centre power still comes from fossil fuels, even when supply contracts are nominally renewable.
  • AI water consumption is projected at 4-7 billion cubic metres annually by 2027, equivalent to the annual water use of 24 million people, and concentrated in already water-stressed regions.
  • Data centres produce a spike in construction jobs but very few permanent local jobs, contradicting promises made by politicians and CEOs.
  • On copyright: GPT-4 was reportedly trained on over a petabyte of data, and Anthropic allegedly destroyed millions of books to train Claude, with no compensation to creators.
  • “AI psychosis” cases and documented student cognitive underperformance are cited as emerging individual-level harms beyond systemic ones.

Hacker News Comment Review

  • The renewable energy additionality argument drew pushback: one commenter argued that increased demand does drive new renewable buildout over time, and called the essay's framing of power demand as inherently bad reductive.
  • Commenters largely found the piece more grounded than typical AI criticism but flagged that agentic coding tools specifically raise both ethical and practical skill-atrophy concerns for new developers.
  • The essay's game-of-chicken framing was contested: one commenter characterized the dynamic driving AI adoption as an arms race instead, a distinction with different policy implications.

Notable Comments

  • @TimByte: “usefulness doesn’t automatically justify unlimited deployment, opaque training practices or turning every public service and workplace into an experiment”
  • @burlesona: reframes the adoption dynamic as an arms race, not a game of chicken, implying coordination failures are structural.

Original | Discuss on HN