We approach multilinguality as sense adaptation: aligning latent meaning representations across languages rather than relying solely on shared parameters and scale. In this paper, we introduce SENse-based Symmetric Interlingual Alignment (SENSIA), which adapts a Backpack language model from one language to another by explicitly aligning sense-level mixtures and contextual representations on parallel data, while jointly optimizing a target-language language-modeling loss to preserve fluency. Across benchmarks in four typologically diverse languages, SENSIA generally outperforms comparable multilingual alignment methods and remains competitive with monolingual models trained from scratch while using 2-4x less target-language data. Analyses of the learned sense geometry indicate that both local sense topology and global structure relative to English are largely preserved, and ablations show that the method is robust to choices of design and scale.
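The joint objective described above can be sketched as a weighted sum of an alignment term over sense-level representations and a target-language language-modeling term. This is a minimal illustrative sketch, not the paper's implementation: the function names, the use of mean squared error for alignment, and the weighting scheme are all assumptions.

```python
import numpy as np

# Hypothetical sketch of a SENSIA-style joint objective (names and loss
# choices are illustrative assumptions, not taken from the paper).

def alignment_loss(src_senses, tgt_senses):
    """Mean squared distance between aligned sense-level representations.
    src_senses, tgt_senses: arrays of shape [num_senses, dim]."""
    return float(np.mean((src_senses - tgt_senses) ** 2))

def lm_loss(logits, targets):
    """Cross-entropy of next-token predictions.
    logits: [seq_len, vocab_size]; targets: [seq_len] integer token ids."""
    shifted = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))
    return float(-log_probs[np.arange(len(targets)), targets].mean())

def joint_loss(src_senses, tgt_senses, logits, targets, lam=1.0):
    """Alignment term on parallel data plus a target-language LM term,
    with lam trading off fluency preservation against alignment."""
    return alignment_loss(src_senses, tgt_senses) + lam * lm_loss(logits, targets)

# Degenerate check: identical sense vectors zero out the alignment term,
# and uniform (all-zero) logits give an LM loss of log(vocab_size).
rng = np.random.default_rng(0)
senses = rng.normal(size=(16, 64))
loss = joint_loss(senses, senses, np.zeros((8, 100)), np.zeros(8, dtype=int))
```

In practice the two terms would be computed per minibatch (alignment on parallel pairs, LM loss on monolingual target-language text) and backpropagated jointly; the scalar `lam` here stands in for whatever balancing the actual training schedule uses.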