Release: llm 0.31

TLDR

  • llm 0.31 adds GPT-5.5 support, verbosity control, image detail options, and async registration for extra-openai-models.yaml entries.

Key Takeaways

  • Run GPT-5.5 from the command line with llm -m gpt-5.5.
  • Control output length with -o verbosity low|medium|high for GPT-5+ models.
  • Set image attachment detail with -o image_detail low|high|auto; GPT-5.4 and 5.5 also accept original.
  • Models defined in extra-openai-models.yaml are now automatically registered as asynchronous.
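The flags above combine naturally on the command line. A quick sketch, assuming llm 0.31+ is installed and an OpenAI API key is configured (prompts and the image filename are placeholders; actual output depends on your account's model access):

```shell
# Run the new model directly
llm -m gpt-5.5 "Summarize the llm 0.31 release notes"

# Keep the answer terse to save tokens on GPT-5+ models
llm -m gpt-5.5 -o verbosity low "Explain asyncio in one paragraph"

# Attach an image and control how much detail the model sees
llm -m gpt-5.5 -o image_detail high -a photo.jpg "Describe this image"
```

`-a` is llm's standard attachment flag; `-o` sets per-model options, so unsupported options on older models will be rejected with a validation error rather than silently ignored.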

Why It Matters

  • GPT-5.5 access lands in the CLI immediately, so builders using llm in scripts or automation can adopt the new model without API client updates.
  • The verbosity flag gives direct control over token spend vs. response depth on supported models, a concrete cost and quality lever.
  • Async registration for custom model lists reduces manual wiring when extending llm with non-default OpenAI-compatible endpoints.
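For context, entries in extra-openai-models.yaml follow llm's documented format for OpenAI-compatible endpoints. A minimal sketch (the model names, endpoint URL, and key name here are placeholder assumptions; the file lives in llm's user data directory):

```yaml
# extra-openai-models.yaml: each entry registers one OpenAI-compatible model.
# As of 0.31, these are registered as async models automatically.
- model_id: my-local-model               # the name you pass to `llm -m`
  model_name: example-model              # model name sent to the endpoint
  api_base: "http://localhost:8000/v1"   # any OpenAI-compatible API base URL
  api_key_name: my-endpoint-key          # optional key stored via `llm keys set`
```

Previously, getting async support for such entries required extra wiring; with 0.31 the same YAML entry serves both the sync and async code paths.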

Simon Willison · 2026-04-24