A.X series vs Granite-4.0-H-350M

Side-by-side benchmark comparison across agentic, coding, multimodal, knowledge, reasoning, and math workflows.

Benchmark data for one or both models is coming soon. This page currently shows metadata and pricing where BenchLM has it, and score-level comparisons will populate as public benchmark results land.


Quick Verdict

Benchmark data for A.X series and Granite-4.0-H-350M is coming soon on BenchLM.

BenchLM has partial data for these models, but not enough overlapping benchmark coverage to produce a fair score-level comparison yet.

A.X series has the larger context window at 64K, compared with 32K for Granite-4.0-H-350M.

Operational tradeoffs

Metric | A.X series | Granite-4.0-H-350M
Price | N/A | Free*
Speed | N/A | N/A
TTFT | N/A | N/A
Context | 64K | 32K

Decision framing

BenchLM keeps the benchmark table and the operator tradeoffs on the same page so a better score does not hide a materially slower, pricier, or smaller-context model.

Runtime metrics show N/A when BenchLM does not have a sourced snapshot for that exact model. The scoring rules and freshness policy are documented on the methodology page.

Benchmark | A.X series | Granite-4.0-H-350M

Agentic
Coming soon

Coding
HumanEval | Coming soon | 39%

Multimodal & Grounded
Coming soon

Reasoning
BBH | Coming soon | 33.1%

Knowledge
MMLU | Coming soon | 35.0%
GPQA | Coming soon | 24.1%
MMLU-Pro | Coming soon | 12.1%

Instruction Following
IFEval | Coming soon | 55.4%

Multilingual
MGSM | Coming soon | 14.7%

Mathematics
Coming soon
Frequently Asked Questions

Can I compare A.X series and Granite-4.0-H-350M on BenchLM yet?

Not fully yet. BenchLM is tracking both models, but the sourced benchmark breakdown for this comparison is still coming soon.

Why does this comparison show “coming soon”?

BenchLM only shows category winners and benchmark-level calls when we have sourced results that can be compared fairly. For these models, the public benchmark coverage is not complete enough yet.

What data is available for A.X series and Granite-4.0-H-350M today?

Pricing for Granite-4.0-H-350M: $0.00 input / $0.00 output per 1M tokens. Both model pages still include creator, context window, reasoning mode, and other metadata while benchmark coverage fills in.

Last updated: March 31, 2026
