Qwen3.6-27B: Flagship-Level Coding in a 27B Dense Model

https://qwen.ai/blog?id=qwen3.6-27b


TL;DR

Qwen claims Opus-class coding performance from a 27B dense model that fits in ~20GB and runs on consumer hardware.

Key Takeaways

  • Runs at 25 tok/s on M5 Pro; Q4_K_M fits in 24GB VRAM with 91k context
  • Local models closing the gap with frontier APIs threatens Anthropic/OpenAI pricing power
  • Benchmark skepticism is warranted: the model is roughly two orders of magnitude smaller than Opus's rumored size
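A back-of-envelope check of the "~20GB, fits in 24GB VRAM with 91k context" claim. The architecture numbers below (layers, KV heads, head dim) are hypothetical, since the post does not specify Qwen3.6-27B's config; the bits-per-weight figure is a typical effective rate for Q4_K_M GGUF quantization, not an official spec.

```python
# Sketch: estimate VRAM for a 27B model at Q4_K_M plus a long-context KV cache.
# Assumptions (not from the post): ~4.85 effective bits/weight for Q4_K_M,
# and a hypothetical GQA layout for the KV-cache math.

PARAMS = 27e9
BITS_PER_WEIGHT = 4.85  # typical effective rate for Q4_K_M

weights_gb = PARAMS * BITS_PER_WEIGHT / 8 / 1e9  # ~16.4 GB

# KV cache bytes = 2 tensors (K, V) * layers * kv_heads * head_dim * bytes/elem * tokens
LAYERS, KV_HEADS, HEAD_DIM = 48, 4, 128  # hypothetical GQA config
CTX = 91_000
kv_fp16_gb = 2 * LAYERS * KV_HEADS * HEAD_DIM * 2 * CTX / 1e9  # fp16 cache, ~8.9 GB
kv_q8_gb = kv_fp16_gb / 2                                      # q8_0 cache, ~4.5 GB

print(f"weights:         ~{weights_gb:.1f} GB")
print(f"KV cache (fp16): ~{kv_fp16_gb:.1f} GB -> total ~{weights_gb + kv_fp16_gb:.1f} GB")
print(f"KV cache (q8_0): ~{kv_q8_gb:.1f} GB -> total ~{weights_gb + kv_q8_gb:.1f} GB")
```

Under these assumptions the weights alone come to ~16.4GB, but a 91k-token fp16 cache would push past 24GB, so the quoted fit would seem to depend on aggressive GQA and/or KV-cache quantization.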

Discussion

Top comments:

  • [simonw]: Ran it on M5 Pro 128GB; excellent pelican SVG, better than Opus 4.7
  • [jedisct1]: Found 8/10 security bugs on small codebases overnight, zero false positives
  • [lgessler]: Hard to believe a 27B model matches Opus rumored at 100x larger
  • [jameson]: Open-source models at a fraction of Anthropic's pricing erode their competitive moat

Discuss on HN


Type: Link
Added: Apr 22, 2026
Modified: Apr 22, 2026
Comments: 250
HN ID: 47863217
Score: 512
Target URL: https://qwen.ai/blog?id=qwen3.6-27b