Large Language Models (LLMs) have been revolutionizing a myriad of natural language processing tasks with their diverse zero-shot capabilities. Indeed, existing work has shown that LLMs can be used to great effect for many tasks, such as information retrieval (IR) and passage ranking. However, current state-of-the-art results lean heavily on the capabilities of the LLM being used. At present, proprietary and very large LLMs such as GPT-4 are the highest-performing passage re-rankers. Hence, users without the resources to leverage top-of-the-line LLMs, or who can only access closed-source models, are at a disadvantage. In this paper, we investigate the use of a pre-filtering step before passage re-ranking in IR. Our experiments show that by using a small number of human-generated relevance scores, coupled with LLM relevance scoring, it is possible to effectively filter out irrelevant passages before re-ranking. Our experiments also show that this pre-filtering allows the LLM to perform significantly better at the re-ranking task. Indeed, our results show that smaller models such as Mixtral can become competitive with much larger proprietary models (e.g., ChatGPT and GPT-4).
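The pre-filtering idea described above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the paper's exact procedure: we assume a small human-labeled sample is used to pick a score cutoff, and passages whose LLM relevance score falls below that cutoff are dropped before re-ranking. All function names here are hypothetical.

```python
# Hedged sketch of score-threshold pre-filtering before LLM re-ranking.
# Assumption: the cutoff is chosen to maximize agreement with a small
# human-annotated sample; the paper may calibrate it differently.

def choose_threshold(labeled):
    """labeled: list of (llm_score, is_relevant) pairs from a small
    human-labeled sample. Returns the cutoff that best separates
    relevant from irrelevant passages on that sample."""
    best_t, best_acc = 0.0, -1.0
    for t in sorted({score for score, _ in labeled}):
        acc = sum((score >= t) == rel for score, rel in labeled) / len(labeled)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def prefilter(passages, llm_scores, threshold):
    """Keep only passages whose LLM relevance score clears the cutoff;
    only the survivors are passed to the (more expensive) re-ranker."""
    return [p for p, s in zip(passages, llm_scores) if s >= threshold]

# Toy example: four human labels calibrate the cutoff, which then
# filters a candidate pool before re-ranking.
labeled = [(0.9, True), (0.8, True), (0.3, False), (0.2, False)]
t = choose_threshold(labeled)          # -> 0.8 on this toy sample
kept = prefilter(["p1", "p2", "p3"], [0.95, 0.25, 0.7], t)  # -> ["p1"]
```

The intuition is that a cheap, coarse relevance signal removes clearly irrelevant passages, so the re-ranker spends its capacity only on plausible candidates.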