Neural Machine Translation (NMT) models are typically trained on datasets with limited exposure to scientific, technical, and educational domains. Translation models therefore generally struggle with tasks involving scientific understanding or technical jargon, and their performance is even worse for low-resource Indian languages. Finding a translation dataset that caters to these domains in particular poses a difficult challenge. In this paper, we address this gap by creating a multilingual parallel corpus containing more than 2.8 million high-quality English-to-Indic and Indic-to-Indic translation pairs across 8 Indian languages. We achieve this by bitext mining human-translated transcriptions of NPTEL video lectures. We also fine-tune and evaluate NMT models on this corpus and surpass all other publicly available models on in-domain tasks. We further demonstrate the potential for generalizing to out-of-domain translation tasks, improving the baseline by over 2 BLEU on average for these Indian languages on the Flores+ benchmark. We are pleased to release our model and dataset via this link: https://huggingface.co/SPRINGLab.