Why AI companies want you to be afraid of them


TLDR

  • AI companies publicly declare their own models too dangerous to release; critics say this pattern inflates perceived power to deflect regulation and boost valuations.

Key Takeaways

  • Claude Mythos was declared too risky to release; independent experts flagged the absence of reported false positive rates as a key credibility gap in its vulnerability-discovery claims (see the sketch after this list).
  • OpenAI declared GPT-2 too dangerous to release in 2019, then released it months later; Altman has since called those fears misplaced, yet the playbook repeats.
  • Anthropic partnered with 40+ organizations to patch Mythos-found vulnerabilities while avoiding any benchmark comparison against existing enterprise security tools.
  • Apocalypse framing positions incumbents as indispensable safety stewards and makes regulatory bodies feel outmatched and structurally irrelevant.
  • Anthropic, OpenAI, and Google DeepMind have all walked back earlier safety commitments as commercial pressure toward public listings mounted.
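
Why does the missing false positive rate matter? Here is a minimal sketch; the `ScanResult` class and every number in it are hypothetical, not taken from Anthropic's report. In vulnerability triage, "false positive rate" typically means the share of model-flagged findings that fail human review, and without that denominator a raw count of discovered vulnerabilities is unfalsifiable.

```python
from dataclasses import dataclass

@dataclass
class ScanResult:
    """Human-triage outcome for a batch of model-flagged findings.

    Hypothetical illustration: these denominators are exactly what the
    Mythos vulnerability claims did not publish.
    """
    flagged: int     # findings the model reported
    confirmed: int   # findings human triage verified as real

    @property
    def false_positive_rate(self) -> float:
        # Triage-level FPR: share of flagged findings that were not real.
        return (self.flagged - self.confirmed) / self.flagged if self.flagged else 0.0

    @property
    def precision(self) -> float:
        # Share of flagged findings that were real vulnerabilities.
        return self.confirmed / self.flagged if self.flagged else 0.0


# Two hypothetical scans with the same headline number of findings:
strong = ScanResult(flagged=500, confirmed=450)
noisy = ScanResult(flagged=500, confirmed=20)
print(f"strong: precision={strong.precision:.0%}, FPR={strong.false_positive_rate:.0%}")
print(f"noisy:  precision={noisy.precision:.0%}, FPR={noisy.false_positive_rate:.0%}")
```

A headline of "500 vulnerabilities found" describes both scans above equally well; only the precision and false positive rate distinguish a useful tool from a triage burden.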

Hacker News Comment Review

  • Vulnerability researchers on HN broadly believe frontier models will surface a flood of critical CVEs; the dispute is over magnitude, not over whether Mythos-tier capability matters.
  • Multiple commenters argue the fear narrative suppresses near-term accountability: environmental cost, labor disruption, and social harm get obscured behind extinction-level hypotheticals.
  • A thread frames apocalypse marketing as internal expectation management: if AI might end civilization, asking why sprint velocity did not improve feels petty.

Notable Comments

  • @InputName: “In lieu of a technological moat, companies search for regulatory capture.”
  • @boh: argues AI is software that stays inert without human intention; cites Claude Code wiping a production database despite explicit instructions as counterevidence to claims of autonomy.
