As the Information Retrieval (IR) field increasingly recognizes the importance of inclusivity, addressing the needs of low-resource languages remains a significant challenge. Transliteration between Urdu and its Romanized form, Roman Urdu, is underexplored despite the widespread use of both scripts in South Asia. Prior work applying RNNs to the Roman-Urdu-Parl dataset showed promising results but suffered from poor domain adaptability and limited evaluation. We propose a transformer-based approach built on the m2m100 multilingual translation model, enhanced with masked language modeling (MLM) pretraining and fine-tuned on both Roman-Urdu-Parl and the domain-diverse Dakshina dataset. To address the flaws in prior evaluation, we introduce rigorous dataset splits and assess performance using BLEU, character-level BLEU, and chrF. Our model achieves strong transliteration performance, with Char-BLEU scores of 96.37 for Urdu->Roman Urdu and 97.44 for Roman Urdu->Urdu, outperforming both RNN baselines and GPT-4o Mini and demonstrating the effectiveness of multilingual transfer learning for low-resource transliteration.
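To make the modeling setup concrete, below is a minimal inference sketch using the HuggingFace transformers implementation of m2m100. It is illustrative only: the public base checkpoint stands in for the fine-tuned weights, and reusing the "en" language tag as a target tag for Roman Urdu is an assumption made here because m2m100 has no dedicated Roman Urdu code, not a convention confirmed by the paper.

```python
# Minimal m2m100 transliteration sketch (HuggingFace transformers).
# Assumptions: the public base checkpoint stands in for fine-tuned
# weights, and the "en" language tag is reused as a stand-in target
# tag for Roman Urdu.
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model_name = "facebook/m2m100_418M"  # swap in a fine-tuned checkpoint in practice
tokenizer = M2M100Tokenizer.from_pretrained(model_name)
model = M2M100ForConditionalGeneration.from_pretrained(model_name)

def transliterate(text: str, src_lang: str, tgt_lang: str) -> str:
    """Generate a transliteration by forcing the target language token."""
    tokenizer.src_lang = src_lang
    encoded = tokenizer(text, return_tensors="pt")
    generated = model.generate(
        **encoded,
        forced_bos_token_id=tokenizer.get_lang_id(tgt_lang),
        max_new_tokens=128,
    )
    return tokenizer.batch_decode(generated, skip_special_tokens=True)[0]

# Urdu script -> Roman Urdu (language tags as assumed above)
print(transliterate("آپ کیسے ہیں؟", src_lang="ur", tgt_lang="en"))
```

The reported metrics can be computed with sacrebleu, as sketched below; character-level BLEU is taken here to mean BLEU with sacrebleu's "char" tokenizer, and the example strings are illustrative rather than drawn from the actual test sets.

```python
# Sketch of the evaluation metrics via sacrebleu: corpus BLEU,
# character-level BLEU (BLEU with the "char" tokenizer), and chrF.
import sacrebleu

hypotheses = ["aap kaise hain"]
references = [["aap kese hain"]]  # one reference stream, one sentence per hypothesis

bleu = sacrebleu.corpus_bleu(hypotheses, references)
char_bleu = sacrebleu.corpus_bleu(hypotheses, references, tokenize="char")
chrf = sacrebleu.corpus_chrf(hypotheses, references)
print(f"BLEU {bleu.score:.2f} | Char-BLEU {char_bleu.score:.2f} | chrF {chrf.score:.2f}")
```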