Breaking News · Mar 8, 2026 · 4 min read

Sarvam AI Open-Sources 105B and 30B LLMs, India's First Competitive Indigenous Models

BENGALURU — Sarvam AI has released two new large language models trained from scratch on Indian data: the 105-billion-parameter Sarvam 105B and the 30-billion-parameter Sarvam 30B. The open-source release marks a significant step toward building sovereign AI capabilities in India, with both models emphasizing strong performance on reasoning, programming, agentic tasks and real-world conversational use cases.

The Bengaluru-based startup positioned the 105B model, which uses a Mixture-of-Experts (MoE) architecture with 10.3 billion active parameters, as competitive with other open-source and mid-scale frontier models in its class. Sarvam 30B is optimized for real-time deployment and inference efficiency. Both models were trained on datasets focused on Indian languages and contexts, addressing criticism that earlier Indian models relied heavily on foreign data and infrastructure.

According to Sarvam AI's official blog post, Sarvam 105B "performs well on reasoning, programming, and agentic tasks across a wide range of benchmarks." The company claims the 30B model delivers 3-6x higher inference throughput compared to Qwen3-30B-A3B while using standard inference stacks such as vLLM. The models are now available on Hugging Face, allowing developers worldwide to access and build upon India's first truly competitive open-source LLMs.
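
The announcement does not spell out repository names, but open-weight checkpoints on Hugging Face can typically be loaded through vLLM's offline API. Below is a minimal sketch assuming a hypothetical repository ID; the actual ID should be taken from Sarvam AI's Hugging Face organization.

```python
# Minimal vLLM serving sketch. The repository ID below is a placeholder,
# not a confirmed name -- substitute the actual ID from Sarvam AI's
# Hugging Face organization.
from vllm import LLM, SamplingParams

llm = LLM(model="sarvamai/sarvam-105b")  # hypothetical repo ID
params = SamplingParams(temperature=0.7, max_tokens=256)

prompts = ["Summarize the benefits of open-weight LLMs for Indic languages."]
for output in llm.generate(prompts, params):
    print(output.outputs[0].text)
```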

Technical Details and Training Approach

Sarvam 105B adopts a sparse MoE architecture, activating only 10.3 billion parameters during inference despite its total 105 billion parameter count. This design helps balance capability with computational efficiency. The company states both models were trained entirely from scratch domestically, a deliberate shift from its prior work that reportedly involved collaboration with Nvidia and greater reliance on international datasets.
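
To make the sparse-activation arithmetic concrete, the toy sketch below illustrates top-k expert routing, the mechanism MoE models typically use. Sarvam has not published its routing details, so this is a generic illustration with made-up sizes, not the model's actual design.

```python
import numpy as np

# Toy illustration of top-k Mixture-of-Experts routing. All sizes are
# illustrative values, not the real 105B configuration.
rng = np.random.default_rng(0)

d_model, n_experts, top_k = 64, 8, 2
x = rng.standard_normal(d_model)                    # one token's hidden state
router_w = rng.standard_normal((n_experts, d_model))
expert_w = rng.standard_normal((n_experts, d_model, d_model))

logits = router_w @ x                               # router score per expert
chosen = np.argsort(logits)[-top_k:]                # indices of the top-k experts
weights = np.exp(logits[chosen])
weights /= weights.sum()                            # softmax over the chosen experts

# Only the chosen experts run, so per-token compute scales with top_k
# rather than n_experts -- the same reason a 105B-parameter model can
# activate only ~10.3B parameters per token.
y = sum(w * (expert_w[e] @ x) for w, e in zip(weights, chosen))
print(y.shape)  # (64,)
```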

The focus on Indian languages and cultural context is central to the release. Sarvam aims to accelerate development of a "sovereign, voice-first AI ecosystem" tailored to India's linguistic diversity and unique use cases. Early feedback on Hacker News noted the models' potential significance for local AI sovereignty, though some observers questioned the exact inference throughput claims given the use of standard vLLM deployment.

Competitive Positioning

The launch positions Sarvam AI as a leader in India's emerging foundation model ecosystem. Previous Indian efforts have faced criticism for depending on foreign training data and infrastructure. Sarvam 105B directly responds to that critique with a fully domestic training run and datasets, according to reporting by Business Standard.

In the broader global landscape, the models enter a crowded field of open-source releases ranging from Meta's Llama series to various Chinese and European efforts. Sarvam's emphasis on regional languages and voice-first applications differentiates it from many Western-centric models, potentially offering advantages for South Asian markets and diaspora communities.

Impact on Developers and India's AI Ecosystem

For developers, the open-source availability of Sarvam 105B and 30B provides new building blocks for applications targeting Indian users. The 30B model's optimization for real-time conversational scenarios makes it particularly suitable for customer service, education and voice assistants in multiple Indian languages.
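
One common deployment pattern for such real-time scenarios is vLLM's OpenAI-compatible server, started with `vllm serve <model-id>` and driven from any OpenAI-style client. A hedged sketch, again using a placeholder model ID:

```python
# Chat sketch against a vLLM OpenAI-compatible endpoint started with
# e.g. `vllm serve sarvamai/sarvam-30b` (placeholder ID). The endpoint
# URL and model name are assumptions, not confirmed values.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")

response = client.chat.completions.create(
    model="sarvamai/sarvam-30b",  # placeholder repo ID
    messages=[
        {"role": "user", "content": "नमस्ते! आज का मौसम कैसा रहेगा?"},
    ],
)
print(response.choices[0].message.content)
```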

The release strengthens India's sovereign AI ambitions by reducing dependence on foreign models for critical applications. It also signals growing maturity in the country's AI startup scene, with Sarvam AI emerging as one of the few domestic players capable of training frontier-scale models from scratch.

Industry observers see this as an important milestone for local talent retention and technological self-reliance. By open-sourcing the models, Sarvam invites the global developer community to contribute to further improvements, potentially accelerating progress in Indic language AI capabilities.

What's Next

Sarvam AI has not yet detailed a specific timeline for subsequent model releases or fine-tuned variants. The company is expected to focus on expanding the models' capabilities in voice interfaces and domain-specific applications relevant to Indian enterprises and government use cases.

Community contributions on platforms like Hugging Face will likely influence the next phase of development. Early benchmarks and real-world testing by independent researchers will help validate Sarvam's performance claims against other leading open-source models.

As India continues investing in its national AI mission, releases like Sarvam 105B could serve as foundational infrastructure for broader digital public goods initiatives. The company may also explore partnerships with local cloud providers to offer optimized inference services tailored to the Indian market.

Sources

Original Source: sarvam.ai
