Existing multilingual embedding models often struggle in cross-lingual scenarios due to imbalanced linguistic resources and insufficient consideration of cross-lingual alignment during training. Although standard contrastive learning approaches for cross-lingual adaptation are widely adopted, they may fail to capture the fundamental alignment between languages and can degrade performance in well-aligned languages such as English. To address these challenges, we propose Cross-Lingual Enhancement in Retrieval via Reverse-training (CLEAR), a novel loss function that employs a reverse training scheme to improve retrieval performance across diverse cross-lingual retrieval scenarios. CLEAR leverages an English passage as a bridge to strengthen the alignment between the target language and English, ensuring robust performance on the cross-lingual retrieval task. Our extensive experiments demonstrate that CLEAR achieves notable improvements in cross-lingual scenarios, with gains of up to 15%, particularly for low-resource languages, while minimizing performance degradation in English. Furthermore, our findings show that CLEAR remains effective even in multilingual training, suggesting its potential for broad application and scalability. We release the code at https://github.com/dltmddbs100/CLEAR.
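The abstract does not spell out the objective itself; for intuition only, below is a minimal PyTorch sketch of one plausible reading, assuming CLEAR couples a standard in-batch InfoNCE retrieval loss with a bridging term that pulls the target-language query toward a parallel English passage. The function names (`info_nce`, `clear_loss`), the weighting coefficient `alpha`, and the temperature value are all illustrative assumptions, not definitions from the paper.

```python
import torch
import torch.nn.functional as F

def info_nce(q: torch.Tensor, p: torch.Tensor, temperature: float = 0.05) -> torch.Tensor:
    """Standard in-batch InfoNCE: each query's positive is the passage
    at the same batch index; all other passages serve as negatives."""
    q = F.normalize(q, dim=-1)
    p = F.normalize(p, dim=-1)
    logits = q @ p.T / temperature                       # (B, B) similarity matrix
    labels = torch.arange(q.size(0), device=q.device)    # diagonal entries are positives
    return F.cross_entropy(logits, labels)

def clear_loss(query_tgt: torch.Tensor,
               passage_tgt: torch.Tensor,
               passage_en: torch.Tensor,
               alpha: float = 1.0,
               temperature: float = 0.05) -> torch.Tensor:
    """Hypothetical CLEAR-style objective (a sketch, not the paper's exact
    loss): the usual retrieval loss in the target language, plus a bridging
    term that aligns the target-language query with the parallel English
    passage, using English as an anchor between languages."""
    retrieval = info_nce(query_tgt, passage_tgt, temperature)
    bridge = info_nce(query_tgt, passage_en, temperature)
    return retrieval + alpha * bridge
```

Under this reading, the English passage acts as a shared anchor: because English embeddings are already well aligned after pretraining, tying each target-language query to the English rendering of its positive passage transfers that alignment to lower-resource languages without retraining English representations from scratch.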