Sarvam 30B Uncensored Variant Released via Abliteration
Key Facts
- What: Community member "aoxo" released an uncensored version of Sarvam AI's 30B parameter model using the abliteration technique.
- When: Released approximately one week after Sarvam AI open-sourced its original 30B and 105B models.
- Where: Available for download on Hugging Face at https://huggingface.co/aoxo/sarvam-30b-uncensored.
- Method: Abliteration, a technique that surgically removes safety alignment from model weights.
- Context: Sarvam AI's original models were trained on 16 trillion tokens spanning code, web data, math, and multilingual content.
A community developer has released an "uncensored" version of Sarvam AI's recently open-sourced 30B parameter language model, applying the abliteration technique to strip its built-in safety alignment. The modified model, hosted by user "aoxo" on Hugging Face, appeared just one week after the Indian AI company made its 30B and 105B models publicly available. The release illustrates a growing trend of modifying open-weight models to eliminate refusals and content restrictions.
Sarvam AI, an Indian artificial intelligence company, open-sourced its 30B and 105B parameter models earlier this month. According to the company's official blog, the 30B model was trained on 16 trillion tokens, while the larger 105B variant used 12 trillion tokens. The pre-training data included code, general web data, specialized knowledge corpora, mathematics, and multilingual content, with the final mixture optimized for reasoning, factual grounding, and software capabilities.
The uncensored variant was announced on Reddit's r/artificial subreddit by user Available-Deer1723, who noted that "it's only been a week since release and the devs are at it again," linking to the Hugging Face repository. Abliteration has become a popular method in the open-source AI community for removing safety training from large language models without full retraining: it identifies the directions in the model's activation space associated with refusal behavior and neutralizes them by editing the weights directly.
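The core linear-algebra step behind abliteration can be sketched with toy data. This is a minimal illustration, not the actual pipeline used for this model: the arrays below are random stand-ins for captured residual-stream activations, and a single matrix stands in for a real transformer's projection weights.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 8

# Toy stand-ins for mean activations captured on prompts the model
# refuses vs. prompts it answers (real pipelines record these with hooks).
acts_refused = rng.normal(size=(32, d_model)) + np.array([3.0] + [0.0] * (d_model - 1))
acts_answered = rng.normal(size=(32, d_model))

# The "refusal direction" is the normalized difference of the two means.
refusal_dir = acts_refused.mean(axis=0) - acts_answered.mean(axis=0)
refusal_dir /= np.linalg.norm(refusal_dir)

# Ablating a weight matrix: orthogonally project out the component of its
# output that writes along the refusal direction, i.e. W' = (I - r r^T) W.
W = rng.normal(size=(d_model, d_model))  # hypothetical output projection
W_abliterated = W - np.outer(refusal_dir, refusal_dir @ W)

# After editing, the weights can no longer write along that direction.
print(np.abs(refusal_dir @ W_abliterated).max())  # near 0 (machine precision)
```

Applied across a model's layers, this one projection is what makes the edit cheap relative to retraining: only a direction estimate and a matrix update per weight, no gradient steps.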
Similar abliteration-based uncensored models have gained attention recently, including modifications of other 30B-scale models that can run locally on high-end consumer GPUs like the RTX 4090. These modified models are often promoted for their ability to operate without typical content restrictions while maintaining native tool-calling capabilities and other advanced features.
The original Sarvam models have drawn interest for being trained from scratch by an Indian company, offering an alternative to dominant Western and Chinese models like those from OpenAI and Google. Coverage in The Times of India highlighted how these open-source releases differ from proprietary systems like ChatGPT and Gemini, particularly in terms of accessibility and customization potential.
Impact
The rapid appearance of an uncensored Sarvam 30B variant demonstrates how quickly the open-source AI community can modify and redistribute foundation models once they are released with open weights. This development provides developers and researchers with greater flexibility but also raises questions about the effectiveness and longevity of safety alignment techniques when models are fully open-sourced.
For users with sufficient hardware, the availability of a 30B-scale uncensored model offers new options for local deployment. The original Sarvam 30B was positioned as a capable reasoning and coding model, and the modified version removes typical content filters that might otherwise limit certain use cases.
The release adds to the growing ecosystem of Indian AI development, showcasing both the innovation coming from companies like Sarvam AI and the active modification culture within the global open-source community.
What's Next
The Hugging Face repository for aoxo/sarvam-30b-uncensored will likely see community testing and potential further modifications. As more organizations release large open-weight models, the speed at which alignment removal techniques are applied continues to accelerate. Sarvam AI has not yet issued an official statement regarding the uncensored variant of its 30B model.
The trend toward abliteration and similar techniques suggests that safety alignment in open-source models may increasingly depend on post-deployment safeguards rather than baked-in training constraints.
Sources
- Reddit r/artificial: Sarvam 30B Uncensored via Abliteration
- Hugging Face: aoxo/sarvam-30b-uncensored
- Sarvam AI Blog: Open-Sourcing Sarvam 30B and 105B
- The Times of India: Sarvam 30B and 105B AI models are now open-source
- Reddit r/LocalLLaMA: New OpenSource Models Available—Sarvam 30B and 105B

