This paper addresses the challenges in developing language models for less-represented languages, with a focus on Luxembourgish. Despite active development of the language, Luxembourgish suffers from digital data scarcity, exacerbated by Luxembourg's multilingual context. We propose a novel text generation model based on the T5 architecture, combining the limited Luxembourgish data with German and French data of equal size and type. We hypothesise that training on Luxembourgish, German, and French will improve the model's cross-lingual transfer learning capabilities and allow it to outperform both monolingual and large multilingual models. To verify this, the study at hand explores whether multilingual or monolingual training is more beneficial for Luxembourgish language generation. For the evaluation, we introduce LuxGen, a text generation benchmark that is the first of its kind for Luxembourgish.