We present "Paramanu", a family of novel language models (LMs) for Indian languages, consisting of auto-regressive monolingual, bilingual, and multilingual models pretrained from scratch. It currently covers 10 languages (Assamese, Bangla, Hindi, Konkani, Maithili, Marathi, Odia, Sanskrit, Tamil, Telugu) across 5 scripts (Bangla, Devanagari, Odia, Tamil, Telugu). The models are pretrained on a single GPU with a context size of 1024 and range from 13.29 million (M) to 367.5 M parameters. We propose a RoPE embedding scaling method that enables pretraining language models from scratch at longer context lengths than typical GPU memory permits. We also introduce a novel, efficient Indic tokenizer, "mBharat", which combines BPE and Unigram, achieves the lowest fertility score, and can tokenize unseen languages written either in the same script or in Roman script. We further propose and perform language-specific tokenization for multilingual models and domain-specific tokenization for monolingual models. To address the "curse of multilinguality" in our mParamanu model, we pretrained on comparable corpora grouped by typology within the same script. Our findings show a language transfer phenomenon from low-resource to high-resource languages among languages sharing the same script and typology. Human evaluations of open-ended text generation demonstrated that Paramanu models outperformed several LLMs despite being 20 to 64 times smaller. We created instruction-tuning datasets and instruction-tuned our models on 23,000 instructions in the respective languages. Comparisons with multilingual LLMs across benchmarks for natural language (NL) understanding, NL inference, and reading comprehension highlight the advantages of our models, leading to the conclusion that high-quality generative language models are achievable without massive compute power or an enormous number of parameters.
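The fertility score cited for the mBharat tokenizer is the standard tokenizer-efficiency metric: the average number of subword tokens produced per word, where a value closer to 1 means words are rarely fragmented. A minimal sketch of how it is computed (the `bigram_tokenize` stand-in below is a hypothetical toy tokenizer, not the mBharat algorithm):

```python
# Fertility = total subword tokens / total words. Lower is better:
# a score near 1.0 means the tokenizer keeps most words intact.
def fertility(words, tokenize):
    total_tokens = sum(len(tokenize(w)) for w in words)
    return total_tokens / len(words)

# Toy stand-in tokenizer that splits a word into character bigrams
# (illustrative only; a real BPE/Unigram tokenizer learns its merges).
def bigram_tokenize(word):
    return [word[i:i + 2] for i in range(0, len(word), 2)]

words = ["namaste", "bharat"]
print(fertility(words, bigram_tokenize))  # (4 + 3) / 2 = 3.5
```

Fertility is typically compared across tokenizers on the same held-out corpus, which is how a "lowest fertility" claim is established.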
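The abstract does not specify the RoPE scaling rule, so the following is only a hedged sketch of one common variant, position interpolation, in which position indices are divided by a scale factor so that longer sequences map into the position range the model was trained on; the paper's actual method may differ:

```python
import math

# Sketch of RoPE with position scaling (position-interpolation variant,
# assumed for illustration). Each consecutive (even, odd) dimension pair
# is rotated by an angle that depends on the scaled position.
def rope_angles(pos, dim, base=10000.0, scale=1.0):
    # One rotation angle per dimension pair; dividing pos by `scale`
    # compresses long positions into the trained range.
    return [(pos / scale) / (base ** (2 * i / dim)) for i in range(dim // 2)]

def apply_rope(vec, pos, scale=1.0):
    # Rotate each (even, odd) pair of the query/key vector by its angle.
    out = list(vec)
    for i, theta in enumerate(rope_angles(pos, len(vec), scale=scale)):
        c, s = math.cos(theta), math.sin(theta)
        x, y = vec[2 * i], vec[2 * i + 1]
        out[2 * i] = x * c - y * s
        out[2 * i + 1] = x * s + y * c
    return out

# With scale=2.0, position 2048 is treated exactly like position 1024,
# so a model trained at context 1024 can attend over longer sequences.
q = [1.0, 0.0, 1.0, 0.0]
assert apply_rope(q, 2048, scale=2.0) == apply_rope(q, 1024, scale=1.0)
```

The appeal of such scaling for single-GPU pretraining is that the effective position range, not the raw sequence length, stays within what the rotary frequencies were designed for.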