https://qwen.ai/blog?id=qwen3.6-35b-a3b
Article
- Qwen releases a 35B-parameter MoE model with only 3B active parameters, targeting coding.
- Open weights; optimized for agentic coding workflows.
- Efficient inference footprint despite the large total parameter count.
- Follows the earlier Qwen3.6 series; other sizes have not yet been released.
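The "35B total, 3B active" split is the key efficiency claim. A rough sketch of why, using the standard estimate that decoding costs about 2 FLOPs per parameter per token; applying it only to the active expert subset (an assumption about this model's routing, not a figure from the post) gives the compute saving versus a dense model of the same total size:

```python
# Back-of-envelope per-token decode cost for a MoE model, assuming the
# common dense-transformer estimate FLOPs ~= 2 * params per token applies
# to the active-parameter subset. The 35e9 / 3e9 figures are from the post;
# everything else here is a rough approximation, not a benchmark.
TOTAL_PARAMS = 35e9   # total parameters (governs memory footprint)
ACTIVE_PARAMS = 3e9   # active parameters per token (governs compute)

flops_per_token_moe = 2 * ACTIVE_PARAMS    # ~6 GFLOPs/token
flops_per_token_dense = 2 * TOTAL_PARAMS   # ~70 GFLOPs/token if dense
compute_ratio = flops_per_token_dense / flops_per_token_moe
```

Under these assumptions the MoE model does roughly 35/3 ≈ 11.7x less compute per generated token than a dense 35B model, while still requiring memory for all 35B weights.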
Discussion
- Unsloth has already published GGUF quantizations; HF weights were available immediately.
- Community is relieved that Qwen still ships open weights after internal team departures and restrictions.
- Debate over local vs. cloud inference: banking and healthcare orgs are cited as key open-weight use cases.
- Hardware questions are prominent: 24 GB of VRAM is tight for Q4 quantization, and a 36 GB Mac limits usable context.
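The "24 GB is tight for Q4" comment follows from simple arithmetic, sketched below. The 4.8 bits/weight figure is an assumption (GGUF Q4 variants carry per-block scales, so effective bits-per-weight exceeds 4); actual file sizes vary by quant format:

```python
def quantized_weight_size_gib(n_params: float, bits_per_weight: float) -> float:
    """Approximate in-memory size of quantized weights in GiB."""
    return n_params * bits_per_weight / 8 / 2**30

# Assumed figures: 35e9 total params, ~4.8 effective bits/weight for a
# Q4-class GGUF quant. Neither number is from the post beyond "35B" and "Q4".
weights_gib = quantized_weight_size_gib(35e9, 4.8)
```

This lands around 19.6 GiB for the weights alone, leaving only a few GiB of a 24 GB card for KV cache and runtime overhead, which is why commenters call it tight.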
Discuss on HN