Large language models demonstrate reasonable multilingual abilities, despite predominantly English-centric pretraining. However, the spontaneous multilingual alignment in these models is shown to be weak, leading to unsatisfactory cross-lingual transfer and knowledge sharing. Previous works attempt to address this issue by explicitly injecting multilingual alignment information during or after pretraining. Consequently, in the early stage of pretraining, the alignment remains too weak to support information or knowledge sharing across languages. In this paper, we propose PreAlign, a framework that establishes multilingual alignment prior to language model pretraining. PreAlign injects multilingual alignment by initializing the model to generate similar representations of aligned words, and preserves this alignment using a code-switching strategy during pretraining. Extensive experiments in a synthetic English to English-Clone setting demonstrate that PreAlign significantly outperforms standard multilingual joint training in language modeling, zero-shot cross-lingual transfer, and cross-lingual knowledge application. Experiments in real-world scenarios further validate PreAlign's effectiveness across various model sizes.
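To make the first component concrete, the sketch below shows one way aligned-word representations could be pulled together before pretraining begins. This is a minimal illustrative sketch, not the authors' released implementation: the lexicon of aligned token-id pairs, the InfoNCE-style contrastive objective, and all hyperparameters are assumptions for exposition.

```python
# Minimal sketch (not the paper's released code): before pretraining,
# optimize the embedding table so that words in a bilingual lexicon
# map to similar vectors. Token ids, loss, and hyperparameters are
# illustrative assumptions.
import torch
import torch.nn.functional as F

vocab_size, dim = 32000, 512
embedding = torch.nn.Embedding(vocab_size, dim)

# Hypothetical aligned word pairs: (source token id, target token id).
lexicon = torch.tensor([[17, 20841], [93, 25610], [402, 28004]])

optimizer = torch.optim.Adam(embedding.parameters(), lr=1e-3)
for step in range(100):
    src = F.normalize(embedding(lexicon[:, 0]), dim=-1)
    tgt = F.normalize(embedding(lexicon[:, 1]), dim=-1)
    # InfoNCE-style objective: each source word should be closest to
    # its own translation among all target words in the batch.
    logits = src @ tgt.T / 0.07
    labels = torch.arange(len(lexicon))
    loss = F.cross_entropy(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```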
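The second component, code-switching during pretraining, can be sketched as a simple data-augmentation step. Again this is an assumed illustration rather than the paper's exact procedure: the lexicon contents, the replacement probability, and word-level (rather than subword-level) substitution are all hypothetical choices.

```python
import random

# Hypothetical lexicon mapping English words to aligned translations.
lexicon = {"cat": "chat", "house": "maison", "water": "eau"}

def code_switch(tokens, lexicon, p=0.15, rng=random):
    """Randomly replace words that appear in the lexicon with their
    aligned translations, leaving the rest of the sentence intact."""
    return [lexicon[t] if t in lexicon and rng.random() < p else t
            for t in tokens]

# With p=1.0 every lexicon word is swapped, for demonstration:
print(code_switch("the cat drinks water in the house".split(), lexicon, p=1.0))
# ['the', 'chat', 'drinks', 'eau', 'in', 'the', 'maison']
```

Mixing such code-switched sentences into the pretraining stream gives the model repeated evidence that aligned words are interchangeable in context, which is what preserves the alignment injected at initialization.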