Automated interlinear gloss prediction with neural networks is a promising approach to accelerating language documentation efforts. However, while state-of-the-art models like GlossLM achieve high scores on glossing benchmarks, user studies with linguists have identified critical barriers to such models' usefulness in real-world scenarios. In particular, existing models typically generate morpheme-level glosses but assign them to whole words without predicting the underlying morpheme boundaries, making the predictions less interpretable and therefore harder for human annotators to trust. We conduct the first study of neural models that jointly predict interlinear glosses and the corresponding morphological segmentation from raw text. We run experiments to determine how best to train models that balance segmentation accuracy, glossing accuracy, and the alignment between the two tasks. We extend the training corpus of GlossLM and pretrain PolyGloss, a family of multilingual seq2seq models for joint segmentation and glossing that outperforms GlossLM on glossing and surpasses various open-source LLMs on segmentation, glossing, and alignment. In addition, we demonstrate that PolyGloss can be quickly adapted to a new dataset via low-rank adaptation.
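To make the joint prediction task concrete, the sketch below shows one plausible way to serialize morpheme segments and their glosses into a single seq2seq target so that the segmentation-gloss alignment is explicit. The separator tokens and serialization scheme are illustrative assumptions, not the paper's actual output format.

```python
# Minimal sketch (assumed format, not PolyGloss's actual one) of a
# joint segmentation + glossing target: each word is split into
# morphemes, and each morpheme is paired with exactly one gloss,
# making the alignment between the two tasks explicit.

def joint_target(morphemes: list[list[str]], glosses: list[list[str]]) -> str:
    """Serialize per-word morphemes and glosses into one target string."""
    words = []
    for segs, labels in zip(morphemes, glosses):
        assert len(segs) == len(labels), "one gloss per morpheme"
        words.append("-".join(segs) + " | " + "-".join(labels))
    return " ; ".join(words)

# Spanish "cantábamos" segments as "cant-ába-mos", glossed "sing-IPFV-1PL"
print(joint_target([["cant", "ába", "mos"]], [["sing", "IPFV", "1PL"]]))
# -> "cant-ába-mos | sing-IPFV-1PL"
```

Because each gloss label is tied to a predicted morpheme boundary rather than a whole word, an annotator can see exactly which segment each label claims to describe.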
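The abstract's final claim, quick adaptation via low-rank adaptation, corresponds to standard LoRA fine-tuning of a pretrained seq2seq checkpoint. The following sketch uses Hugging Face `peft`; the base checkpoint (`google/byt5-base`, a stand-in since GlossLM builds on ByT5), target modules, and hyperparameters are illustrative assumptions rather than the paper's reported configuration.

```python
# Hedged sketch: LoRA adaptation of a pretrained seq2seq glossing model
# to a new dataset. Checkpoint and hyperparameters are assumptions.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

base = "google/byt5-base"  # assumed stand-in for a PolyGloss checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSeq2SeqLM.from_pretrained(base)

# LoRA injects small trainable low-rank matrices into the attention
# projections; the original weights stay frozen, so adaptation only
# updates a tiny fraction of parameters.
config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=16,              # scaling factor for the updates
    lora_dropout=0.05,
    target_modules=["q", "v"],  # T5-style attention projection names
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% trainable

# From here, fine-tune on the new language's (raw text, joint
# segmentation+gloss target) pairs with a standard Seq2SeqTrainer loop.
```

Keeping the base model frozen is what makes this adaptation "quick": only the small LoRA matrices need to be trained and stored per new dataset.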