Cross-Encoder (CE) and Dual-Encoder (DE) models are two fundamental approaches for query-document relevance in information retrieval. To predict relevance, CE models use joint query-document embeddings, while DE models maintain factorized query and document embeddings; usually, the former offers higher quality while the latter benefits from lower latency. Recently, late-interaction models have been proposed to realize more favorable latency-quality tradeoffs by using a DE structure followed by a lightweight scorer based on query and document token embeddings. However, these lightweight scorers are often hand-crafted, and their approximation power is not well understood; further, such scorers require access to individual document token embeddings, which imposes an increased latency and storage burden. In this paper, we propose novel learnable late-interaction models (LITE) that resolve these issues. Theoretically, we prove that LITE is a universal approximator of continuous scoring functions, even for a relatively small embedding dimension. Empirically, LITE outperforms previous late-interaction models such as ColBERT on both in-domain and zero-shot re-ranking tasks. For instance, experiments on MS MARCO passage re-ranking show that LITE not only yields a model with better generalization, but also lowers latency and requires only 0.25x the storage of ColBERT.
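To make the "hand-crafted lightweight scorer" concrete, the following is a minimal NumPy sketch of ColBERT-style late interaction (the MaxSim operator): given per-token query and document embeddings from a DE backbone, each query token is matched to its most similar document token, and the matches are summed. The function name and shapes are ours for illustration; LITE replaces this fixed operator with a learnable scorer.

```python
import numpy as np

def maxsim_score(Q: np.ndarray, D: np.ndarray) -> float:
    """ColBERT-style MaxSim late-interaction score.

    Q: (num_query_tokens, dim) query token embeddings.
    D: (num_doc_tokens, dim) document token embeddings.
    Embeddings are assumed L2-normalized, so dot products are cosine similarities.
    """
    sim = Q @ D.T                     # (q, d) token-level similarity matrix
    return float(sim.max(axis=1).sum())  # best doc token per query token, summed
```

Note that this scorer needs every document token embedding at ranking time, which is the storage and latency burden the abstract refers to.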