Sarvam-105B: India's Open-Source Answer to GPT Just Launched at the AI Summit

India's Homegrown LLM Arrives
Indian AI startup Sarvam AI launched two large language models at the India AI Impact Summit in New Delhi: Sarvam-30B and Sarvam-105B. Both will be released as open source — a strategic choice designed to drive adoption among developers, enterprises, and government agencies seeking alternatives to foreign AI systems.
The Model Specs
- Sarvam-30B — Pre-trained on 16 trillion tokens, 32,000-token context window
- Sarvam-105B — 128,000-token context window, handling much longer documents and conversations
- Both use mixture-of-experts (MoE) architecture, activating only a fraction of parameters at a time to dramatically reduce compute costs
- Built entirely from scratch without relying on external datasets
- Supports 10 Indian languages
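To make the mixture-of-experts point concrete: an MoE layer holds many expert networks but routes each token through only the top-scoring few, so active compute is a small fraction of total parameters. The sketch below is a generic, illustrative top-k MoE forward pass in NumPy, not Sarvam's actual architecture; all sizes and names are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_experts, k = 8, 16, 2          # hidden size, total experts, experts used per token

# Each "expert" here is just one weight matrix; only k of the n_experts run per token.
expert_weights = [rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(n_experts)]
gate_w = rng.standard_normal((d, n_experts)) / np.sqrt(d)   # router weights

def moe_forward(x):
    logits = x @ gate_w                      # router score for every expert
    topk = np.argsort(logits)[-k:]           # indices of the k best experts
    w = np.exp(logits[topk] - logits[topk].max())
    w /= w.sum()                             # softmax over the chosen experts only
    # Active parameters per token: k / n_experts of the expert weights (2/16 = 12.5% here)
    return sum(wi * (x @ expert_weights[i]) for wi, i in zip(w, topk))

x = rng.standard_normal(d)
y = moe_forward(x)                           # output has the same shape as the input
```

This is why a 105B-parameter MoE model can serve requests far more cheaply than a dense model of the same size: per token it pays only for the experts the router selects.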
The Full Suite
Beyond the LLMs, the lineup includes a text-to-speech model, a speech-to-text model, and a vision model for document parsing — a full stack for Indian language AI applications.
Strategic Context
The launch aligns directly with India's push to reduce reliance on foreign AI platforms. Founded in 2023, Sarvam has raised more than $50 million from Lightspeed Venture Partners, Khosla Ventures, and Peak XV Partners (formerly Sequoia Capital India).
The Bottom Line
Sarvam-105B: 105 billion parameters, a 128K context window, mixture-of-experts architecture, open source, built from scratch, with support for 10 Indian languages. This is the "Atmanirbhar AI" moment made real — and launching it at the India AI Impact Summit, in front of global leaders and investors, is a statement of intent.