GPT-5.5 Bio Bug Bounty


TLDR

  • OpenAI is running a paid red-teaming challenge on GPT-5.5 targeting biosafety jailbreaks, with up to $25,000 for a universal exploit.

Key Takeaways

  • The top prize ($25,000) requires finding a single universal jailbreak that clears all five biosafety test questions.
  • Participants must have a ChatGPT account and sign an NDA before accessing program details.
  • The application asks candidates to describe a proposed jailbreak approach before the actual test questions are revealed.
  • Access is invite-only: OpenAI will extend invitations to a vetted list of trusted bio red-teamers.

Hacker News Comment Review

  • Commenters flagged a structural problem: applicants must describe their jailbreak approach before seeing the questions, making a coherent proposal nearly impossible.
  • The NDA requirement drew skepticism as a silencing mechanism: it blocks public disclosure of findings, which commenters said undermines the program's credibility as genuine security research.
  • Under the winner-takes-all payout structure, only one researcher collects the $25,000 no matter how many valid exploits are independently discovered, leading some commenters to call the framing a scam.

Notable Comments

  • @applfanboysbgon: “Even if 100 people find ‘bugs’, they will only pay out to one person” – the winner-takes-all framing undercuts the bounty premise entirely.
