Utilizing large language models (LLMs) for zero-shot document ranking is typically done in one of two ways: (1) prompt-based re-ranking methods, which require no further training but are only feasible for re-ranking a handful of candidate documents due to computational costs; and (2) unsupervised contrastively trained dense retrieval methods, which can retrieve relevant documents from the entire corpus but require a large amount of paired text data for contrastive training. In this paper, we propose PromptReps, which combines the advantages of both categories: no need for training and the ability to retrieve from the whole corpus. Our method requires only prompts to guide an LLM to generate query and document representations for effective document retrieval. Specifically, we prompt the LLM to represent a given text using a single word, and then use the last token's hidden states and the corresponding logits associated with the prediction of the next token to construct a hybrid document retrieval system. The retrieval system harnesses both the dense text embedding and the sparse bag-of-words representation given by the LLM. Our experimental evaluation on the MSMARCO, TREC Deep Learning, and BEIR zero-shot document retrieval datasets illustrates that this simple prompt-based LLM retrieval method can achieve similar or higher retrieval effectiveness than state-of-the-art LLM embedding methods that are trained with large amounts of unsupervised data, especially when using a larger LLM.
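The hybrid scoring idea in the abstract can be sketched as follows. Each text yields (a) a dense vector, standing in for the last token's hidden state, and (b) a sparse bag-of-words vector over the vocabulary, derived from the next-token logits by keeping only the top-k entries. Here the LLM outputs are simulated with random arrays; in a real system they would come from a prompted causal LM. The dimensions, the top-k sparsification, and the unweighted sum of the two scores are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
hidden_size, vocab_size, top_k = 64, 1000, 8

def make_representations(n: int):
    """Simulate per-text LLM outputs: a dense last-hidden-state embedding
    and a sparse bag-of-words vector built from next-token logits."""
    dense = rng.standard_normal((n, hidden_size))   # stand-in for hidden states
    logits = rng.standard_normal((n, vocab_size))   # stand-in for logits
    # Sparsify: keep only the top-k logits per text, clip negatives, zero the rest.
    sparse = np.zeros_like(logits)
    idx = np.argpartition(logits, -top_k, axis=-1)[:, -top_k:]
    rows = np.arange(n)[:, None]
    sparse[rows, idx] = np.maximum(logits[rows, idx], 0.0)
    return dense, sparse

q_dense, q_sparse = make_representations(1)   # one query
d_dense, d_sparse = make_representations(5)   # five documents

# Hybrid score: dense similarity plus sparse bag-of-words overlap,
# mirroring the dense + sparse combination described above.
dense_score = q_dense @ d_dense.T
sparse_score = q_sparse @ d_sparse.T
hybrid = dense_score + sparse_score
ranking = np.argsort(-hybrid, axis=-1)        # documents ordered by hybrid score
```

In practice the sparse vectors would be indexed with an inverted index (as in learned sparse retrieval) and the dense vectors with an approximate nearest-neighbour index, with the two result lists fused at query time.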