Retrieval systems have generally focused on web-style queries that are short and underspecified. However, advances in language models have facilitated the nascent rise of retrieval models that can understand more complex queries with diverse intents. Yet these efforts have focused exclusively on English, so we do not yet understand how such models work across languages. We introduce mFollowIR, a multilingual benchmark for measuring instruction-following ability in retrieval models. mFollowIR builds upon the TREC NeuCLIR narratives (or instructions), which span three diverse languages (Russian, Chinese, Persian), providing both a query and an instruction to the retrieval models. We make small changes to the narratives and isolate how well retrieval models can follow these nuanced changes. We present results for both multilingual (XX-XX) and cross-lingual (En-XX) performance. We observe strong cross-lingual performance from English-based retrievers that were trained using instructions, but find a notable drop in performance in the multilingual setting, indicating that more work is needed to develop instruction-based training data for multilingual retrievers.