This paper introduces XProvence, a multilingual zero-cost context pruning model for retrieval-augmented generation (RAG), trained on 16 languages and supporting 100+ languages through effective cross-lingual transfer. Motivated by the growing use of RAG systems across diverse languages, we explore several strategies to generalize the Provence framework, which first integrated efficient zero-cost context pruning directly into the re-ranking model, beyond English. Across four multilingual question answering benchmarks, we show that XProvence prunes RAG contexts with minimal-to-no performance degradation and outperforms strong baselines. Our model is available at https://huggingface.co/naver/xprovence-reranker-bgem3-v2.