# Qwen3.6-35B-A3B: Agentic Coding Power, Now Open to All

> Alibaba open-sourced Qwen3.6-35B-A3B, a 35B MoE with 3B active params scoring 73.4 on SWE-bench Verified and integrating with Claude Code via OpenAI-compatible APIs.

Published: 2026-04-16
URL: https://daniliants.com/insights/qwen36-35b-a3b-agentic-coding-power-now-open-to-all/
Tags: open-source-llm, mixture-of-experts, agentic-coding, qwen, alibaba, coding-agents, multimodal, sparse-models

---

## Summary

Alibaba's Qwen team open-sourced Qwen3.6-35B-A3B, a mixture-of-experts model with 35B total parameters but only 3B active per token, which rivals dense models with several times its active parameter count on agentic coding benchmarks. It scores 73.4 on SWE-bench Verified (vs 75.0 for the dense 27B Qwen3.5), supports multimodal input in both thinking and non-thinking modes, and integrates directly with Claude Code, OpenClaw, and other coding assistants via OpenAI-compatible and Anthropic-compatible APIs.

## Key Insight

- **Efficiency breakthrough at 3B active params:** The model activates only ~8.6% of its 35B parameters per token yet matches or exceeds dense 27B models on most benchmarks. This makes it runnable on much smaller hardware while delivering near-frontier performance.
- **Agentic coding is the headline capability:** SWE-bench Verified 73.4 (up from 70.0 for its predecessor), Terminal-Bench 2.0 at 51.5 (beating all listed competitors, including the dense 27B at 41.6), and NL2Repo at 29.4 (best in class). These are real-world code-editing and repo-level task benchmarks, not just code completion.
- **MCP and tool-use benchmarks are strong:** MCPMark 37.0 and MCP-Atlas 62.8 indicate the model handles tool orchestration well, relevant for anyone building AI agent pipelines.
- **Vision capabilities match Claude Sonnet 4.5:** On several vision-language benchmarks (MMMU, RealWorldQA, OmniDocBench) the model matches or exceeds Claude Sonnet 4.5 despite being a fraction of the size. Spatial intelligence is a standout (RefCOCO 92.0, ODInW13 50.8).
- **Drop-in replacement for Claude Code:** Alibaba's API supports the Anthropic protocol natively. Setting `ANTHROPIC_BASE_URL` and `ANTHROPIC_AUTH_TOKEN` lets it serve as a Claude Code backend, which is notable for cost-sensitive agentic workflows.
- **preserve_thinking feature:** The API supports preserving chain-of-thought from all preceding turns in multi-turn conversations, which is recommended for agentic tasks. This mirrors Anthropic's extended thinking approach.
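The Claude Code hookup described above amounts to pointing the Anthropic client environment at an alternative endpoint. A minimal sketch follows; the base URL and token are placeholders, not real values (the actual endpoint and key come from your Alibaba Cloud console, which the article does not spell out):

```python
import os

# Placeholder values: substitute the Anthropic-compatible endpoint and
# API key issued by your provider (e.g. from the Alibaba Cloud console).
os.environ["ANTHROPIC_BASE_URL"] = "https://your-provider.example/anthropic"
os.environ["ANTHROPIC_AUTH_TOKEN"] = "sk-your-api-key"

# Any Claude Code process launched from this environment (for example
# via subprocess.run(["claude"])) will send its requests to the
# configured backend instead of Anthropic's API.
print(os.environ["ANTHROPIC_BASE_URL"])
```

Exporting the same two variables in your shell profile before running `claude` achieves the identical effect without Python.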
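How `preserve_thinking` might slot into a request is sketched below as a plain OpenAI-style chat payload. The feature name comes from the article; its exact placement (here, a top-level body field) and the model identifier are assumptions to verify against the provider's API reference:

```python
# Multi-turn payload sketch. With preserve_thinking enabled, the
# chain-of-thought from preceding turns is retained across the
# conversation, so later turns of an agentic task can build on
# earlier reasoning rather than starting fresh.
payload = {
    "model": "qwen3.6-35b-a3b",  # assumed model id
    "messages": [
        {"role": "user", "content": "Find the failing test in the repo."},
        {"role": "assistant", "content": "The failure is in test_parser."},
        {"role": "user", "content": "Now write a fix."},
    ],
    # Assumed placement: sent as an extra body field on the
    # OpenAI-compatible endpoint (e.g. via an SDK's extra_body=...).
    "preserve_thinking": True,
}
print(payload["preserve_thinking"])
```

For single-shot completions the flag should be irrelevant; it matters in the multi-turn agentic loops the article recommends it for.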