We present a compact, single-model approach to multilingual inflection, the task of generating inflected word forms from base lemmas to express grammatical categories. Our model, trained jointly on data from 73 languages, is lightweight, robust to unseen words, and outperforms monolingual baselines in most languages. This demonstrates the effectiveness of multilingual modeling for inflection and highlights its practical benefits: simplifying deployment by eliminating the need to manage and retrain dozens of separate monolingual models. In addition to the standard SIGMORPHON shared task benchmarks, we evaluate our monolingual and multilingual models on 73 Universal Dependencies (UD) treebanks, extracting lemma-tag-form triples and their frequency counts. To ensure realistic data splits, we introduce a novel frequency-weighted, lemma-disjoint train-dev-test resampling procedure. Our work addresses the lack of an open-source, general-purpose, multilingual morphological inflection system capable of handling unseen words across a wide range of languages, including Czech. All code is publicly released at: https://github.com/tomsouri/multilingual-inflection.
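The abstract mentions a frequency-weighted, lemma-disjoint train-dev-test resampling procedure without spelling it out. As a rough illustration only (the function name, greedy assignment strategy, and data layout below are our assumptions, not the paper's actual algorithm), one plausible sketch groups all triples sharing a lemma, then assigns whole lemmas to splits so each split approaches its share of the total token frequency while the lemma sets stay disjoint:

```python
import random
from collections import defaultdict

def lemma_disjoint_split(triples, ratios=(0.8, 0.1, 0.1), seed=42):
    """Split (lemma, tag, form, count) triples into lemma-disjoint
    train/dev/test sets, weighting each lemma by its total frequency
    so the splits approximate the target frequency-mass ratios.
    (Illustrative sketch; not the paper's exact procedure.)"""
    # Group triples by lemma and compute each lemma's total frequency.
    by_lemma = defaultdict(list)
    for lemma, tag, form, count in triples:
        by_lemma[lemma].append((lemma, tag, form, count))
    freq = {l: sum(t[3] for t in ts) for l, ts in by_lemma.items()}
    total = sum(freq.values())

    # Shuffle lemmas for an unbiased order, then assign each whole
    # lemma to the split currently furthest below its frequency target,
    # which keeps the three lemma sets disjoint by construction.
    lemmas = list(by_lemma)
    random.Random(seed).shuffle(lemmas)
    splits = [[], [], []]                     # train, dev, test
    targets = [r * total for r in ratios]
    masses = [0.0, 0.0, 0.0]
    for lemma in lemmas:
        j = max(range(3), key=lambda k: targets[k] - masses[k])
        splits[j].extend(by_lemma[lemma])
        masses[j] += freq[lemma]
    return splits
```

Assigning lemmas (rather than individual triples) is what makes the split lemma-disjoint, so every test lemma is genuinely unseen at training time; weighting by frequency keeps the splits representative of real token distributions.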