With the development of pre-trained language models, dense retrieval models have become promising alternatives to traditional retrieval models that rely on exact matching and sparse bag-of-words representations. Unlike most dense retrieval models, which use a bi-encoder to encode each query or document into a single dense vector, the recently proposed late-interaction multi-vector models (e.g., ColBERT and COIL) achieve state-of-the-art retrieval effectiveness by representing documents and queries with all of their token embeddings and modeling relevance with a sum-of-max operation. However, these fine-grained representations can incur unacceptable storage overhead for practical search systems. In this study, we systematically analyze the matching mechanism of these late-interaction models and show that the sum-of-max operation relies heavily on co-occurrence signals and on a small set of important words in the document. Based on these findings, we propose several simple document pruning methods to reduce the storage overhead and compare their effectiveness across different late-interaction models. We also apply query pruning methods to further reduce retrieval latency. Extensive experiments on both in-domain and out-of-domain datasets show that some of these pruning methods significantly improve the efficiency of late-interaction models without substantially hurting their retrieval effectiveness.
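The sum-of-max (MaxSim) operation described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes query and document token embeddings are given as NumPy matrices with one row per token, and scores a document by taking, for each query token, its maximum dot-product similarity over all document tokens, then summing over query tokens.

```python
import numpy as np

def sum_of_max(query_embs: np.ndarray, doc_embs: np.ndarray) -> float:
    """Late-interaction relevance score (ColBERT-style MaxSim).

    query_embs: (num_query_tokens, dim) token embeddings of the query
    doc_embs:   (num_doc_tokens, dim) token embeddings of the document
    """
    # Pairwise similarity matrix: (num_query_tokens, num_doc_tokens)
    sim = query_embs @ doc_embs.T
    # For each query token, keep its best-matching document token,
    # then sum these maxima over the query tokens.
    return float(sim.max(axis=1).sum())

# Toy example with 2-dimensional embeddings (hypothetical values)
q = np.array([[1.0, 0.0], [0.0, 1.0]])
d = np.array([[1.0, 0.0], [0.5, 0.5]])
print(sum_of_max(q, d))  # 1.0 + 0.5 = 1.5
```

Document pruning, in this framing, amounts to dropping rows of `doc_embs` (token embeddings judged unimportant) before indexing, which shrinks storage at the cost of possibly lowering some scores.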