Large language models (LLMs) trained on massive multilingual datasets hint at the formation of interlingual constructs: a shared subspace in the representation space. However, evidence regarding this phenomenon remains mixed, leaving it unclear whether these models truly develop unified interlingual representations or merely partially aligned constructs. We explore 31 diverse languages varying in resource level, typology, and geographical region, and find that multilingual LLMs exhibit inconsistent cross-lingual alignment. To address this, we propose an interlingual representation framework that identifies both a shared interlingual semantic subspace and fragmented components that arise from representational limitations. We introduce the Interlingual Local Overlap (ILO) score, which quantifies interlingual alignment by comparing the local neighborhood structures of high-dimensional representations. We use ILO to investigate the impact of single-language fine-tuning on the interlingual representations of multilingual LLMs. Our results show that training exclusively on a single language disrupts the alignment in early layers, whereas freezing these layers preserves the alignment of interlingual representations and leads to improved cross-lingual generalization. These results validate our framework and metric for evaluating interlingual representations, and further underscore that interlingual alignment is crucial for scalable multilingual learning.
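To make the idea of "comparing local neighborhood structures" concrete, the following is a minimal illustrative sketch, not the paper's exact ILO definition: given hidden-state representations of the same parallel sentences in two languages, it computes each sentence's k-nearest-neighbor set in each language's representation space and averages the Jaccard overlap of those neighbor sets. The helper names (`knn_indices`, `local_overlap`) and the random stand-in data are hypothetical; the paper's actual score may aggregate neighborhoods differently.

```python
# Illustrative sketch of a local-neighborhood-overlap score (not the exact ILO definition).
import numpy as np

def knn_indices(X, k):
    """Indices of the k nearest neighbors (by cosine similarity) for each row of X."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    sim = Xn @ Xn.T
    np.fill_diagonal(sim, -np.inf)          # exclude each point from its own neighborhood
    return np.argsort(-sim, axis=1)[:, :k]  # top-k most similar rows

def local_overlap(reps_a, reps_b, k=10):
    """
    reps_a, reps_b: (n, d) representations of the same n parallel sentences
    in languages A and B. Returns the mean Jaccard overlap between each
    sentence's k-NN neighborhood in A-space and in B-space.
    (Hypothetical helper; shown only to illustrate neighborhood comparison.)
    """
    nn_a, nn_b = knn_indices(reps_a, k), knn_indices(reps_b, k)
    overlaps = []
    for i in range(len(reps_a)):
        a, b = set(nn_a[i]), set(nn_b[i])
        overlaps.append(len(a & b) / len(a | b))
    return float(np.mean(overlaps))

# Usage with random stand-in representations (well-aligned case should score high):
rng = np.random.default_rng(0)
reps_en = rng.normal(size=(100, 768))
reps_xx = reps_en + 0.1 * rng.normal(size=(100, 768))  # nearly aligned second language
print(local_overlap(reps_en, reps_xx, k=10))
```

Under this reading, a high score means parallel sentences occupy similar local neighborhoods across languages (a shared subspace), while a low score signals the fragmented, language-specific components the framework is meant to expose.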