Information retrieval methods often rely on a single embedding model trained on large, general-domain datasets like MSMARCO. While this approach can produce a retriever with reasonable overall performance, models trained on domain-specific data often yield better results within their respective domains. While prior work in information retrieval has tackled this through multi-task training, the topic of combining multiple domain-specific expert retrievers remains unexplored, despite its popularity in language model generation. In this work, we introduce RouterRetriever, a retrieval model that leverages multiple domain-specific experts along with a routing mechanism to select the most appropriate expert for each query. It is lightweight and allows experts to be added or removed without additional training. Evaluation on the BEIR benchmark demonstrates that RouterRetriever outperforms both MSMARCO-trained (+2.1 absolute nDCG@10) and multi-task trained (+3.2) models. This is achieved by employing our routing mechanism, which surpasses other routing techniques (+1.8 on average) commonly used in language modeling. Furthermore, the benefit generalizes well to other datasets, even when no expert was trained on that dataset. To our knowledge, RouterRetriever is the first work to demonstrate the advantages of using multiple domain-specific expert embedding models with effective routing over a single, general-purpose embedding model in retrieval tasks.
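To make the per-query routing idea concrete, the following is a minimal illustrative sketch, not the paper's exact algorithm: each domain expert is summarized by a centroid vector, and a query embedding is routed to the expert whose centroid is most similar. The expert names, centroid values, and dot-product similarity are all assumptions made for illustration.

```python
import numpy as np

# Hypothetical centroids: one representative vector per domain expert.
# In practice these would be derived from each expert's training domain.
centroids = {
    "biomedical": np.array([0.9, 0.1, 0.0]),
    "finance":    np.array([0.0, 0.2, 0.9]),
    "science":    np.array([0.3, 0.8, 0.2]),
}

def route(query_embedding):
    """Return the name of the expert whose centroid best matches the query
    (highest dot-product similarity)."""
    return max(centroids, key=lambda name: float(query_embedding @ centroids[name]))

def add_expert(name, centroid):
    """Adding an expert requires no retraining of the router in this sketch:
    simply register the new expert's centroid."""
    centroids[name] = np.asarray(centroid, dtype=float)
```

A query embedding close to the biomedical centroid, e.g. `np.array([1.0, 0.0, 0.0])`, is routed to `"biomedical"`; registering a new expert via `add_expert` makes it immediately available to the router, mirroring the lightweight add/remove property described above.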