This paper addresses the challenges of developing language models for less-represented languages, focusing on Luxembourgish. Despite active development efforts, Luxembourgish suffers from digital data scarcity, a problem exacerbated by Luxembourg's multilingual context. We propose a novel text generation model based on the T5 architecture, combining limited Luxembourgish data with equal amounts, in terms of size and type, of German and French data. We hypothesise that training on Luxembourgish, German, and French will improve the model's cross-lingual transfer learning capabilities, allowing it to outperform monolingual and large multilingual models. To verify this, we investigate whether multilingual or monolingual training is more beneficial for Luxembourgish language generation. For evaluation, we introduce LuxGen, the first text generation benchmark for Luxembourgish.