Qwen3.6-27B: Flagship-Level Coding in a 27B Dense Model
TL;DR
Qwen claims its 27B dense model delivers Opus-class coding performance on consumer hardware.
Key Takeaways
- The Q4_K_M quant occupies ~16.8GB on an M5 Pro, so it fits on 32GB RAM machines
- Frontier-quality coding run locally significantly narrows the moat of paid API providers
- Benchmark comparisons to Opus are disputed; wait 2 weeks for community verification
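The ~16.8GB figure for the Q4_K_M quant is roughly what back-of-the-envelope arithmetic predicts. A minimal sketch, assuming an effective bit rate of about 4.9 bits per weight for Q4_K_M (an approximation; the exact rate varies by tensor mix):

```python
def quantized_size_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate in-RAM size of quantized model weights, in GB.

    params_billion: parameter count in billions (e.g. 27 for a 27B model)
    bits_per_weight: effective bits per weight of the quant scheme
    (~4.9 assumed here for Q4_K_M; this is an estimate, not a spec value)
    """
    total_bits = params_billion * 1e9 * bits_per_weight
    return total_bits / 8 / 1e9  # bits -> bytes -> GB

# 27B parameters at ~4.9 bits/weight lands near the reported ~16.8GB,
# leaving headroom for KV cache and OS on a 32GB machine.
print(f"{quantized_size_gb(27, 4.9):.1f} GB")
```

The remaining gap to 16.8GB comes from metadata, embedding tensors quantized at different rates, and runtime buffers, none of which this sketch models.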
Discussion
Top comments:
- [simonw]: Runs well on M5 Pro 128GB; pelican SVG better than Opus 4.7
  "The pelican is excellent for a 16.8GB quantized local model… I like it better than the pelican I got from Opus 4.7 the other day."
- [jedisct1]: Found 8/10 security bugs in a benchmark, with zero false positives
- [originalvichy]: Wait two weeks; early models often have hidden backend bugs