Qwen3.6-27B: Flagship-Level Coding in a 27B Dense Model

https://qwen.ai/blog?id=qwen3.6-27b

TL;DR

Qwen3.6-27B claims flagship coding performance and runs locally on 32GB RAM.

Key Takeaways

  • Runs at ~25 tok/s on an M5 Pro; needs ~20GB of VRAM, so it fits 32GB machines
  • Claims to beat Opus 4.5 on coding benchmarks at a fraction of the API token cost
  • Wait 2 weeks: new model releases routinely ship with backend bugs and bad default configs
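The ~20GB figure is plausible for a quantized 27B dense model. A rough sketch of the arithmetic, where the bits-per-weight, KV-cache size, and overhead factor are all assumptions for illustration (not official Qwen numbers):

```python
# Back-of-envelope VRAM estimate for a dense LLM at a given quantization.
# Only the 27B parameter count and the ~20GB target come from the post;
# every constant below is an assumed, illustrative default.

def estimate_vram_gb(params_billion: float,
                     bits_per_weight: float = 4.5,  # assumed ~4-bit quant w/ metadata
                     kv_cache_gb: float = 2.0,      # assumed cache for a long context
                     overhead_factor: float = 1.1   # assumed runtime buffer overhead
                     ) -> float:
    weights_gb = params_billion * bits_per_weight / 8.0  # bits -> bytes -> GB
    return (weights_gb + kv_cache_gb) * overhead_factor

# 27B params at ~4.5 bits/weight lands in the high-teens of GB,
# consistent with the ~20GB / fits-in-32GB claim above.
print(f"{estimate_vram_gb(27):.1f} GB")
```

At 8 bits/weight the same model would need roughly 29GB for weights alone, which is why the local-on-32GB claim effectively assumes a ~4-bit quant.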

Discussion

Top comments:

  • [simonw]: Ran it locally on an M5 Pro at 25 tok/s; prefers its pelican output to Opus 4.7's
  • [syntaxing]: Qwen 3.6 35B and Gemma 4 26B handle 95% of coding needs fully local
  • [originalvichy]: Wait 2 weeks — community always finds glaring bugs in new model releases

    Many of them suffer from hidden bugs when wired up to an inference backend, or from bad default configs that slow them down.

  • [amunozo]: Skeptical that 27B can genuinely match Opus on real tasks

Discuss on HN


Type Link
Added Apr 22, 2026
Modified Apr 22, 2026
comments 174
hn_id 47863217
score 321
target_url https://qwen.ai/blog?id=qwen3.6-27b