The principle that governs unsupervised cross-lingual learning (UCL) in jointly trained language models (mBERT being a popular example) is still under debate. Many find it surprising that UCL can be achieved from multiple monolingual corpora alone. In this work, we anchor UCL in the context of language decipherment and show that the joint training methodology is a decipherment process pivotal for UCL. In a controlled setting, we investigate the effect of different decipherment settings on multilingual learning performance and consolidate existing opinions on the factors that contribute to multilinguality. From an information-theoretic perspective, we derive a limit on UCL performance and demonstrate the importance of token alignment in challenging decipherment settings caused by differences in data domain, language order, and tokenization granularity. Finally, we apply lexical alignment to mBERT and investigate how aligning different lexicon groups contributes to downstream performance.