We present a novel approach to data preparation for developing a multilingual Indic large language model. Our data acquisition spans open-source and proprietary sources, including Common Crawl, Indic books, news articles, and Wikipedia, ensuring diverse and rich linguistic representation. For each Indic language, we design a custom preprocessing pipeline to effectively eliminate redundant and low-quality text. We also deduplicate the Common Crawl data, in which roughly 70% of the crawled web pages are redundant. This study focuses on curating high-quality data and optimizing tokenization of our multilingual dataset for Indic large language models with 3B and 7B parameters, engineered for superior performance in Indic languages. We introduce a novel multilingual tokenizer training strategy and demonstrate that our custom-trained Indic tokenizer outperforms the state-of-the-art OpenAI Tiktoken tokenizer, achieving a superior token-to-word ratio for Indic languages.
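The deduplication step above can be illustrated with a minimal sketch. The paper does not specify its deduplication algorithm here (production pipelines often use fuzzy methods such as MinHash); the version below assumes only exact-match deduplication over lightly normalized text, with all function names being hypothetical:

```python
import hashlib

def normalize(text):
    # Collapse whitespace and lowercase so trivially different copies
    # of the same page hash to the same value (assumed normalization).
    return " ".join(text.lower().split())

def deduplicate(docs):
    # Keep the first occurrence of each normalized document,
    # dropping exact duplicates by content hash.
    seen, kept = set(), []
    for doc in docs:
        h = hashlib.sha256(normalize(doc).encode("utf-8")).hexdigest()
        if h not in seen:
            seen.add(h)
            kept.append(doc)
    return kept

docs = ["Same page.", "same  page.", "Unique page."]
print(deduplicate(docs))  # ['Same page.', 'Unique page.']
```

Exact hashing like this removes verbatim crawl duplicates cheaply; near-duplicate pages (boilerplate variations, mirrored articles) would additionally require similarity-based techniques.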
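The token-to-word ratio (sometimes called tokenizer fertility) used to compare tokenizers can be sketched as follows. This is an illustrative example, not the paper's evaluation code: the two stand-in tokenizers (word-level and character-level splits) are assumptions chosen to show the two extremes of the metric.

```python
def token_to_word_ratio(tokens, text):
    # Tokens produced per whitespace-delimited word; a lower ratio
    # means the tokenizer encodes the language more compactly.
    words = text.split()
    return len(tokens) / len(words)

text = "नमस्ते दुनिया"  # Hindi sample: "Hello world"

# Stand-in tokenizers (hypothetical): whole-word split vs. the
# character-level worst case.
word_tokens = text.split()
char_tokens = list(text.replace(" ", ""))

print(token_to_word_ratio(word_tokens, text))  # 1.0 (ideal)
print(token_to_word_ratio(char_tokens, text))  # > 1.0 (poor fertility)
```

A tokenizer whose vocabulary covers Indic scripts well stays close to the word-level end of this range, which is the sense in which the custom Indic tokenizer outperforms Tiktoken on this metric.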