We introduce a set of training-free ABX-style discrimination tasks to evaluate how multilingual language models represent language identity (form) and semantic content (meaning). Inspired by speech processing, these zero-shot tasks measure whether minimal differences in representation can be reliably detected, offering a flexible and interpretable alternative to probing. Applied to XLM-R (Conneau et al., 2020) across pretraining checkpoints and layers, we find that language discrimination declines over training and becomes concentrated in lower layers, while meaning discrimination strengthens over time and stabilizes in deeper layers. We then turn to probing tasks and observe some alignment between our metrics and linguistic learning performance. Our results position ABX tasks as a lightweight framework for analyzing the structure of multilingual representations.
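To make the task concrete, the sketch below illustrates the ABX decision rule under common assumptions: representations are fixed vectors (e.g. mean-pooled hidden states), cosine distance is the comparison metric, and A and X share the target property (language or meaning) while B differs. Function and variable names are illustrative, not drawn from the paper.

```python
import numpy as np

def cosine_dist(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine distance between two representation vectors."""
    return 1.0 - float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def abx_score(a_reps, b_reps, x_reps) -> float:
    """Fraction of (A, B, X) triplets in which X is closer to A than to B.

    A and X share the property being tested (same language, or same meaning);
    B does not. 0.5 is chance level, 1.0 is perfect discrimination.
    """
    correct = sum(
        cosine_dist(x, a) < cosine_dist(x, b)
        for a, b, x in zip(a_reps, b_reps, x_reps)
    )
    return correct / len(x_reps)

# Toy usage with random vectors standing in for model hidden states.
rng = np.random.default_rng(0)
a = list(rng.normal(size=(100, 768)))
b = list(rng.normal(size=(100, 768)))
x = [ai + 0.1 * rng.normal(size=768) for ai in a]  # X perturbed copies of A
print(abx_score(a, b, x))  # close to 1.0: X is reliably closer to A
```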