In this paper, we investigate the use of N-gram models and large pre-trained multilingual models for Language Identification (LID) across 11 South African languages. For N-gram models, this study shows that selecting an appropriate training data size remains crucial for building frequency distributions that model each target language well, thereby improving language ranking. For pre-trained multilingual models, we conduct extensive experiments covering a diverse set of massively multilingual pre-trained language models (PLMs), namely mBERT, RemBERT, and XLM-r, as well as Afri-centric multilingual models, namely AfriBERTa, Afro-XLMr, AfroLM, and Serengeti. We further compare these models with available large-scale LID tools: Compact Language Detector v3 (CLD v3), AfroLID, GlotLID, and OpenLID, to highlight the importance of focused LID. We show that Serengeti is the best-performing model on average across all approaches, from N-grams to Transformers. Moreover, we propose a lightweight BERT-based LID model (za_BERT_lid), trained on the NHCLT + Vukzenzele corpus, which performs on par with our best-performing Afri-centric models.
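For readers unfamiliar with frequency-based N-gram LID, the sketch below illustrates the classic rank-order profile approach in Python. It is only an illustrative approximation, not the configuration used in this paper: the function names, toy training strings, language codes, and parameter values are hypothetical placeholders, but it shows how per-language character n-gram frequency distributions yield a language ranking.

```python
# Illustrative sketch of rank-order character n-gram LID (Cavnar & Trenkle style).
# All training strings, language codes, and parameters are hypothetical placeholders.
from collections import Counter

def char_ngrams(text, n_min=1, n_max=3):
    """Yield all character n-grams of length n_min..n_max from padded, lowercased text."""
    padded = f" {text.lower()} "
    for n in range(n_min, n_max + 1):
        for i in range(len(padded) - n + 1):
            yield padded[i:i + n]

def build_profile(texts, top_k=300):
    """Rank the top_k most frequent n-grams; the rank positions form the language profile."""
    counts = Counter()
    for t in texts:
        counts.update(char_ngrams(t))
    return {gram: rank for rank, (gram, _) in enumerate(counts.most_common(top_k))}

def out_of_place_distance(doc_profile, lang_profile, max_penalty=300):
    """Sum of rank differences; n-grams unseen in the language profile get the maximum penalty."""
    return sum(abs(rank - lang_profile.get(gram, max_penalty))
               for gram, rank in doc_profile.items())

def identify(text, profiles):
    """Return the language whose profile is closest to the document's profile."""
    doc_profile = build_profile([text])
    return min(profiles, key=lambda lang: out_of_place_distance(doc_profile, profiles[lang]))

# Hypothetical toy training data (placeholders, not the corpora used in the paper).
train = {
    "eng": ["the quick brown fox jumps over the lazy dog"],
    "afr": ["die vinnige bruin jakkals spring oor die lui hond"],
}
profiles = {lang: build_profile(texts) for lang, texts in train.items()}
print(identify("die hond slaap", profiles))  # expected output: 'afr'
```

The training data size matters here because the top_k ranked n-grams only stabilize once each language's frequency distribution is estimated from enough text; this is the effect the N-gram experiments in the paper examine at a much larger scale.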