BenchLM is tracking Sarvam 30B, but this profile is currently excluded from the public leaderboard because it still lacks enough verified benchmark coverage to rank safely. Only verified public benchmark rows appear below.
Sarvam 30B is an open-weight model with a 64K-token context window. It uses explicit chain-of-thought reasoning, which typically improves performance on math and complex reasoning tasks at the cost of higher latency and token usage.
This profile currently has verified scores for 11 of the 126 benchmarks BenchLM tracks. BenchLM only exposes verified benchmark rows publicly, so missing categories stay blank until a sourced evaluation is available.
Its strongest category is Mathematics (#30). This performance profile makes it particularly strong for mathematical reasoning, scientific computing, and quantitative analysis.
Provider: Sarvam
Source Type: Open Weight
Reasoning: Reasoning
Context Window: 64K
Model Status: Current
Release Date: Mar 6, 2026
Overall Score: Unranked
Pricing: $0.00 / $0.00 (input / output per 1M tokens)
Runtime: N/A (latency unavailable)
LiveCodeBench v6 (2026) · Quarterly refresh · updated April 3, 2026
SWE-bench Verified (2024) · Annual refresh · updated April 3, 2026
HMMT Feb 2025 (2025) · Quarterly refresh · updated April 3, 2026
HMMT Nov 2025 (2025) · Quarterly refresh · updated April 3, 2026
Sarvam 30B has 11 verified benchmark scores on BenchLM, but it does not yet have enough coverage to receive a global overall rank.
Sarvam 30B has visible benchmark coverage in knowledge and understanding, but BenchLM does not currently assign it a global category rank there.
Sarvam 30B has visible benchmark coverage in coding and programming, but BenchLM does not currently assign it a global category rank there.
Sarvam 30B ranks #30 out of 104 models in mathematics benchmarks with an average score of 86.5. There are stronger options in this category.
Sarvam 30B has visible benchmark coverage in reasoning and logic, but BenchLM does not currently assign it a global category rank there.
Sarvam 30B has visible benchmark coverage in agentic tool use and computer tasks, but BenchLM does not currently assign it a global category rank there.
Yes. Sarvam 30B is an open-weight model created by Sarvam, meaning its weights can be downloaded and run locally or fine-tuned for specific use cases.
Not yet. Sarvam 30B currently has 11 verified benchmark scores out of the 126 benchmarks BenchLM tracks. BenchLM only exposes verified public benchmark rows, so missing categories stay blank until a sourced evaluation is available.
Sarvam 30B has a context window of 64K tokens, which determines how much text it can process in a single interaction.
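As a rough illustration of what a 64K-token window means in practice, the sketch below estimates token counts using the common ~4 characters-per-token heuristic. This heuristic is an assumption for illustration only; real counts depend on the model's tokenizer, and the function names here are hypothetical, not part of any Sarvam or BenchLM API.

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Rough token estimate using the ~4 chars/token heuristic.
    Actual counts vary with the model's tokenizer."""
    return max(1, round(len(text) / chars_per_token))

def fits_context(text: str, context_window: int = 64_000) -> bool:
    """Check whether the estimated token count fits in the window."""
    return estimate_tokens(text) <= context_window

sample = "word " * 10_000            # 50,000 characters
print(estimate_tokens(sample))       # ~12,500 estimated tokens
print(fits_context(sample))          # fits comfortably in 64K
```

By this estimate, a 64K window corresponds to very roughly 250,000 characters of English text, though code, non-Latin scripts, and unusual formatting can tokenize far less efficiently.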