
Qwen3.5 397B

Alibaba · Current · Released Feb 16, 2026
Overall Score
66 · Provisional #38 of 110 · Verified #10 of 14
Arena Elo
1400
Categories Ranked
8 of 8
Price (1M tokens)
$0 in / $0 out
Speed
96 tok/s
Context
128K
Open Weight · Non-Reasoning
Confidence
base

According to BenchLM.ai, Qwen3.5 397B ranks #38 out of 110 models on the provisional leaderboard with an overall score of 66/100. It also ranks #10 out of 14 on the verified leaderboard. While not a frontier model, its open weights, fast non-reasoning responses, and strong instruction following give it specific advantages depending on the use case.

Qwen3.5 397B is an open-weight model with a 128K-token context window. It processes queries without explicit chain-of-thought reasoning, which yields faster responses and lower token usage.

Qwen3.5 397B sits inside the Qwen3.5 397B family alongside Qwen3.5 397B (Reasoning). This profile currently covers 36 of the 152 benchmarks BenchLM tracks. BenchLM only exposes non-generated benchmark rows publicly, so missing categories stay blank until a sourced evaluation is available.

Its strongest category is Instruction Following (#22) and its weakest is Multimodal & Grounded (#48). The relatively narrow spread between those ranks makes it a reasonably well-rounded choice across a range of tasks.

Ranking Distribution

Category rank across 8 benchmark categories — sorted by best rank

Category Performance

Scores across all benchmark categories (0-100 scale)

Category Breakdown

Agentic

#37
59.1 / 100
Weight: 22% · 11 benchmarks
Terminal-Bench 2.0 · BrowseComp · OSWorld-Verified · GAIA · TAU-bench · WebArena

Coding

#29
68.7 / 100
Weight: 20% · 3 benchmarks
SWE-bench Verified · LiveCodeBench · SWE-bench Pro · SWE-Rebench · SciCode

Reasoning

#40
59.4 / 100
Weight: 17% · 2 benchmarks
MuSR · LongBench v2 · MRCRv2 · ARC-AGI-2

Knowledge

#30
72.6 / 100
Weight: 12% · 6 benchmarks
GPQA · SuperGPQA · MMLU-Pro · HLE · FrontierScience · SimpleQA

Math

#31
73.3 / 100
Weight: 5% · 5 benchmarks
AIME 2025 · BRUMO 2025 · MATH-500 · FrontierMath

Multilingual

#24
74.3 / 100
Weight: 7% · 2 benchmarks
MGSM · MMLU-ProX

Multimodal

#48
64.1 / 100
Weight: 12% · 6 benchmarks
MMMU-Pro · OfficeQA Pro

Inst. Following

#22
83.0 / 100
Weight: 5% · 1 benchmark
IFEval · IFBench
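The category weights above sum to 100%, so the overall score can be approximated as a weighted average of the category scores. A minimal sketch, assuming a simple weighted mean (BenchLM's exact aggregation method is not documented here):

```python
# Category scores and weights taken from the breakdown above.
# Assumption: the overall score is a plain weighted mean; this is
# not confirmed as BenchLM's exact aggregation formula.
categories = {
    "Agentic":         (59.1, 0.22),
    "Coding":          (68.7, 0.20),
    "Reasoning":       (59.4, 0.17),
    "Knowledge":       (72.6, 0.12),
    "Math":            (73.3, 0.05),
    "Multilingual":    (74.3, 0.07),
    "Multimodal":      (64.1, 0.12),
    "Inst. Following": (83.0, 0.05),
}

total_weight = sum(weight for _, weight in categories.values())
overall = sum(score * weight for score, weight in categories.values()) / total_weight
print(round(overall, 1))  # ≈ 66.3, close to the listed overall of 66
```

The weighted mean lands within a fraction of a point of the published overall score, which suggests the weights shown per category drive the headline number.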

Chatbot Arena Performance

Text Overall: 1400
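An Arena Elo rating translates into head-to-head win expectations via the standard Elo formula. A quick sketch; the 1450-rated opponent below is a made-up example, not a model from this page:

```python
# Standard Elo expected score: E = 1 / (1 + 10 ** ((R_b - R_a) / 400)).
def expected_score(r_a: float, r_b: float) -> float:
    """Probability that a model rated r_a is preferred over one rated r_b."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

# Qwen3.5 397B (1400) vs a hypothetical 1450-rated model.
p = expected_score(1400, 1450)
print(f"{p:.3f}")  # ≈ 0.429
```

In other words, a 50-point Elo gap corresponds to roughly a 43% chance of being preferred in a pairwise comparison, under the standard Elo model.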

Benchmark Details

Only benchmark rows with an attached exact-source record are shown here. Source-unverified manual rows and generated rows are hidden from model pages.

Qwen3.5 397B Family

Base entry

Frequently Asked Questions

How does Qwen3.5 397B perform overall in AI benchmarks?

Qwen3.5 397B currently ranks #38 out of 110 models on BenchLM's provisional leaderboard with an overall score of 66. It also ranks #10 out of 14 on the verified leaderboard. It is created by Alibaba and features a 128K context window.

Is Qwen3.5 397B good for knowledge and understanding?

Qwen3.5 397B ranks #30 out of 110 models in knowledge and understanding benchmarks with an average score of 72.6. There are stronger options in this category.

Is Qwen3.5 397B good for coding and programming?

Qwen3.5 397B ranks #29 out of 110 models in coding and programming benchmarks with an average score of 68.7. There are stronger options in this category.

Is Qwen3.5 397B good for mathematics?

Qwen3.5 397B ranks #31 out of 110 models in mathematics benchmarks with an average score of 73.3. There are stronger options in this category.

Is Qwen3.5 397B good for reasoning and logic?

Qwen3.5 397B ranks #40 out of 110 models in reasoning and logic benchmarks with an average score of 59.4. There are stronger options in this category.

Is Qwen3.5 397B good for agentic tool use and computer tasks?

Qwen3.5 397B ranks #37 out of 110 models in agentic tool use and computer tasks benchmarks with an average score of 59.1. There are stronger options in this category.

Is Qwen3.5 397B good for multimodal and grounded tasks?

Qwen3.5 397B ranks #48 out of 110 models in multimodal and grounded tasks benchmarks with an average score of 64.1. There are stronger options in this category.

Is Qwen3.5 397B good for instruction following?

Qwen3.5 397B ranks #22 out of 110 models in instruction following benchmarks with an average score of 83, making this its strongest category on BenchLM.

Is Qwen3.5 397B good for multilingual tasks?

Qwen3.5 397B ranks #24 out of 110 models in multilingual tasks benchmarks with an average score of 74.3. There are stronger options in this category.

Is Qwen3.5 397B open source?

Qwen3.5 397B is an open-weight model created by Alibaba: its weights can be downloaded, run locally, and fine-tuned for specific use cases. Note that open weights are not necessarily the same as a fully open-source release, which would also cover training code and data.

Which sibling models are related to Qwen3.5 397B?

Qwen3.5 397B belongs to the Qwen3.5 397B family. Related variants on BenchLM include Qwen3.5 397B (Reasoning).

Does Qwen3.5 397B have full benchmark coverage on BenchLM?

Not yet. Qwen3.5 397B currently has 36 published benchmark scores out of the 152 benchmarks BenchLM tracks. BenchLM only exposes non-generated public benchmark rows, so missing categories stay blank until a sourced evaluation is available.

What is the context window size of Qwen3.5 397B?

Qwen3.5 397B has a context window of 128K tokens, which determines how much text it can process in a single interaction.
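As rough arithmetic, a 128K-token window can be translated into an approximate English word count using the common ~0.75 words-per-token heuristic (a rule of thumb, not a property of any specific tokenizer):

```python
# Rough capacity estimate for a 128K-token context window.
# The 0.75 words-per-token ratio is a common heuristic for English
# text, not an exact tokenizer property.
context_tokens = 128_000
words_per_token = 0.75

approx_words = int(context_tokens * words_per_token)
print(approx_words)  # → 96000
```

So the full window holds on the order of 96,000 English words of combined prompt and response, with the exact figure depending on the tokenizer and the text.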

Last updated: April 20, 2026 · Runtime metrics stay blank until BenchLM has a sourced snapshot.
