Cross-encoders deliver state-of-the-art ranking effectiveness in information retrieval, but at a high inference cost. This not only prevents them from being used as first-stage rankers, but also makes re-ranking documents expensive. Prior work has addressed this bottleneck from two largely separate directions: accelerating cross-encoder inference by sparsifying the attention process, or improving first-stage retrieval effectiveness with more complex models, e.g. late-interaction ones. In this work, we propose to bridge these two approaches, building on an in-depth understanding of the internal mechanisms of cross-encoders. Starting from cross-encoders, we show that a new late-interaction-like architecture can be derived by carefully removing detrimental or unnecessary interactions. We name this architecture MICE (Minimal Interaction Cross-Encoders). We extensively evaluate MICE on both in-domain (ID) and out-of-domain (OOD) datasets. MICE reduces inference latency fourfold compared to standard cross-encoders, matching late-interaction models such as ColBERT, while retaining most of the ID effectiveness of cross-encoders and demonstrating superior OOD generalization.