Many pretrained multilingual models exhibit cross-lingual transfer ability, which is often attributed to a language-neutral representation learned during pretraining. However, it remains unclear which factors contribute to the learning of a language-neutral representation, and whether such a representation suffices to facilitate cross-lingual transfer. We propose a synthetic task, Multilingual Othello (mOthello), as a testbed to investigate these two questions. We find that: (1) models trained with naive multilingual pretraining fail to learn a language-neutral representation across all input languages; (2) the introduction of "anchor tokens" (i.e., lexical items that are identical across languages) helps cross-lingual representation alignment; and (3) learning a language-neutral representation alone is not sufficient to facilitate cross-lingual transfer. Based on these findings, we propose a novel approach, multilingual pretraining with a unified output space, which both induces the learning of a language-neutral representation and facilitates cross-lingual transfer.