OpenAI is nothing without its people

policy

TLDR

  • George Hotz argues OpenAI should publish its research openly, rejecting closed science as historically ineffective and structurally dangerous.

Key Takeaways

  • Hotz distinguishes between sharing model weights (justifiably closed, given training costs) and sharing research, architecture, and techniques (unjustifiably closed).
  • The real risk is not powerful individuals like Altman or Musk but Molochian coordination failures: millions of small harmful decisions accumulating with no counterforce.
  • Hotz rejects UBI and democratic oversight as meaningful safeguards, arguing power flows through democratic systems rather than originating from them.
  • The correct path is sharing the technology itself, not subscription access to it; access that can be revoked is described as feudalism, not openness.
  • Science credits whoever publishes, not whoever discovers first; researchers at a closed lab like OpenAI forfeit both historical credit and real-world impact.

Why It Matters

  • Hotz draws a hard line between open research (share it) and open weights (optional), giving labs a concrete, defensible middle path.
  • The feudalism framing reframes the AI access debate: cloud API access is not democratization if the provider retains revocation rights.
  • Written as a direct response to Sam Altman’s blog post, this is a rare public exchange between two prominent figures over the terms of AI governance.

the singularity is nearer · 2026-04-10