The model performs as well as or better than GPT-3.5 on various Hindi tasks while retaining its efficiency in English.
Sarvam AI, an Indian AI startup, has launched OpenHathi-Hi-v0.1, the first Hindi large language model (LLM) in its OpenHathi series. This model is based on Meta AI’s Llama2-7B architecture and reportedly matches the performance of GPT-3.5 for Indic languages.
The model extends Llama2-7B’s tokeniser to a 48,000-token vocabulary and is trained in two stages. The first stage is embedding alignment, which aligns the randomly initialised embeddings of the newly added Hindi tokens with the pretrained embedding space. The second stage is bilingual language modelling, which trains the model to attend across tokens in both languages.
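The embedding-alignment stage can be illustrated with a minimal sketch: new Hindi token rows are appended with random initialisation, and only those rows are updated while the pretrained rows stay frozen. The split of a 32K base vocabulary plus a Hindi extension reaching 48K total, the dimensions, and the `sgd_step` helper are all illustrative assumptions, not Sarvam AI's actual training code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a 32K base vocabulary extended with Hindi tokens
# toward the reported 48K total; dim is shrunk for the sketch.
base_vocab, hindi_ext, dim = 32_000, 16_000, 64

emb = np.vstack([
    rng.normal(0, 0.02, (base_vocab, dim)),  # stands in for pretrained rows
    rng.normal(0, 0.02, (hindi_ext, dim)),   # new Hindi rows, randomly initialised
])

# Stage 1 (embedding alignment): only the new Hindi rows are trainable.
trainable = np.zeros(len(emb), dtype=bool)
trainable[base_vocab:] = True

def sgd_step(emb, grads, lr=1e-3):
    """Apply one gradient step, masked so frozen rows never move."""
    out = emb.copy()
    out[trainable] -= lr * grads[trainable]
    return out

grads = rng.normal(0, 1.0, emb.shape)
updated = sgd_step(emb, grads)

# Frozen base rows are untouched; only the Hindi rows moved.
assert np.array_equal(updated[:base_vocab], emb[:base_vocab])
assert not np.array_equal(updated[base_vocab:], emb[base_vocab:])
```

In a real setup the same effect is typically achieved by freezing all transformer weights and optimising only the resized embedding (and output) matrices.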
Sarvam AI claims that the model performs as well as or better than GPT-3.5 on various Hindi tasks while retaining its efficiency in English, and has evaluated it on practical tasks beyond standard Natural Language Generation (NLG) benchmarks. Sarvam AI has also collaborated with KissanAI to fine-tune the base model on conversational data collected from interactions between a GPT-based bot and farmers in different languages.
The company explained its approach to enhancing Hindi capabilities in Llama2. It reduced the tokeniser’s fertility score (the average number of tokens produced per word) on Hindi text, improving training and inference efficiency. To do so, it built a new tokeniser with a 48K vocabulary by merging a SentencePiece tokeniser trained on the Sangraha corpus from AI4Bharat with Llama2’s tokeniser.
Sarvam AI was founded in July 2023 by Vivek Raghavan and Pratyush Kumar and recently raised $41 million in funding led by Lightspeed Ventures, with contributions from Peak XV Partners and Khosla Ventures.