Qwen3.5-122B-A10B
According to BenchLM.ai, Qwen3.5-122B-A10B ranks #39 out of 115 models on the provisional leaderboard with an overall score of 65/100. It also ranks #8 out of 23 on the verified leaderboard. While not a frontier model, it is competitive in several categories, detailed in the breakdown below.
Qwen3.5-122B-A10B is an open-weight model with a 262K-token context window. It uses explicit chain-of-thought reasoning, which typically improves performance on math and complex reasoning tasks at the cost of higher latency and token usage.
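If you serve the model behind an OpenAI-compatible endpoint, that chain-of-thought phase can often be switched off when latency matters more than reasoning depth. The sketch below assumes a vLLM-style server; the base URL, the model id, and the enable_thinking chat-template flag are all assumptions patterned on how current Qwen3-family deployments expose this switch, not confirmed details of this model.

```python
# Minimal sketch: trading reasoning depth for latency on an OpenAI-compatible
# endpoint. The URL, model id, and enable_thinking flag are assumptions
# (patterned on vLLM serving of Qwen3-family models); check your server docs.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Qwen3.5-122B-A10B",  # hypothetical model id for illustration
    messages=[{"role": "user", "content": "What is 17 * 24?"}],
    # Hypothetical flag: disables the explicit chain-of-thought phase,
    # cutting latency and token usage at some cost on hard reasoning tasks.
    extra_body={"chat_template_kwargs": {"enable_thinking": False}},
)
print(response.choices[0].message.content)
```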
This profile currently covers 15 of the 193 benchmarks BenchLM tracks. BenchLM publicly exposes only non-generated benchmark rows, so missing categories stay blank until a sourced evaluation is available.
Its strongest category is Instruction Following (#14), while its weakest is Multimodal & Grounded (#43). That relatively narrow spread makes it a well-rounded choice across a range of tasks rather than a single-category specialist.
Ranking Distribution
Category rank across 7 benchmark categories — sorted by best rank
Category Performance
Scores across all benchmark categories (0-100 scale)
Category Breakdown
Agentic: #32 (avg. score 58.5)
Coding: no category rank
Reasoning: no category rank
Knowledge: #17 (avg. score 80.9)
Math: no category rank
Multilingual: #25 (avg. score 74.1)
Multimodal: #43 (avg. score 60.7)
Inst. Following: #14 (avg. score 89.2)
Chatbot Arena Performance: no category rank
Benchmark Details
Only benchmark rows with an attached exact-source record are shown here. Source-unverified manual rows and generated rows are hidden from model pages.
Frequently Asked Questions
How does Qwen3.5-122B-A10B perform overall in AI benchmarks?
Qwen3.5-122B-A10B currently ranks #39 out of 115 models on BenchLM's provisional leaderboard with an overall score of 65. It also ranks #8 out of 23 on the verified leaderboard. It was created by Alibaba and has a 262K-token context window.
Is Qwen3.5-122B-A10B good for knowledge and understanding?
Qwen3.5-122B-A10B ranks #17 out of 115 models in knowledge and understanding benchmarks with an average score of 80.9. There are stronger options in this category.
Is Qwen3.5-122B-A10B good for coding and programming?
Qwen3.5-122B-A10B has visible benchmark coverage in coding and programming, but BenchLM does not currently assign it a global category rank there.
Is Qwen3.5-122B-A10B good for reasoning and logic?
Qwen3.5-122B-A10B has visible benchmark coverage in reasoning and logic, but BenchLM does not currently assign it a global category rank there.
Is Qwen3.5-122B-A10B good for agentic tool use and computer tasks?
Qwen3.5-122B-A10B ranks #32 out of 115 models in agentic tool use and computer tasks benchmarks with an average score of 58.5. There are stronger options in this category.
Is Qwen3.5-122B-A10B good for multimodal and grounded tasks?
Qwen3.5-122B-A10B ranks #43 out of 115 models in multimodal and grounded tasks benchmarks with an average score of 60.7. There are stronger options in this category.
Is Qwen3.5-122B-A10B good for instruction following?
Qwen3.5-122B-A10B ranks #14 out of 115 models in instruction following benchmarks with an average score of 89.2. This is its strongest category on BenchLM.
Is Qwen3.5-122B-A10B good for multilingual tasks?
Qwen3.5-122B-A10B ranks #25 out of 115 models in multilingual tasks benchmarks with an average score of 74.1. There are stronger options in this category.
Is Qwen3.5-122B-A10B open source?
Qwen3.5-122B-A10B is an open-weight model created by Alibaba: its weights can be downloaded, run locally, and fine-tuned for specific use cases. Note that open weights are not the same as fully open source, since the training data and code are not necessarily released.
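As a concrete illustration, the sketch below loads an open-weight checkpoint with Hugging Face transformers. The repo id is a hypothetical placeholder, and a model of this size would in practice need multiple GPUs or aggressive quantization and offloading.

```python
# Minimal sketch of running an open-weight checkpoint locally with Hugging Face
# transformers. The repo id is a hypothetical placeholder; a ~122B-parameter
# MoE checkpoint also needs multi-GPU memory or offloading in practice.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3.5-122B-A10B"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", torch_dtype="auto"
)

messages = [{"role": "user", "content": "Name one benefit of open-weight models."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```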
Does Qwen3.5-122B-A10B have full benchmark coverage on BenchLM?
Not yet. Qwen3.5-122B-A10B currently has 15 published benchmark scores out of the 193 benchmarks BenchLM tracks. BenchLM publicly exposes only non-generated benchmark rows, so missing categories stay blank until a sourced evaluation is available.
What is the context window size of Qwen3.5-122B-A10B?
Qwen3.5-122B-A10B has a context window of 262K tokens, which determines how much text it can process in a single interaction.
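A practical corollary is checking that a long prompt actually fits before sending it. The sketch below does this with the model's own tokenizer; the repo id is a hypothetical placeholder, and the exact limit of 262,144 tokens (256 * 1024) is an assumption about how "262K" is rounded.

```python
# Rough sketch: check that a prompt fits the advertised 262K-token window,
# leaving headroom for the generated reply. The repo id is hypothetical, and
# 262_144 (256 * 1024) is an assumed expansion of "262K".
from transformers import AutoTokenizer

CONTEXT_WINDOW = 262_144
RESERVED_FOR_OUTPUT = 4_096  # arbitrary headroom for the model's reply

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3.5-122B-A10B")  # hypothetical

def fits_in_context(text: str) -> bool:
    """Return True if the prompt plus output headroom fits the window."""
    return len(tokenizer.encode(text)) + RESERVED_FOR_OUTPUT <= CONTEXT_WINDOW

print(fits_in_context("Summarize this document: ..."))
```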