Recent advances have demonstrated that large language models (LLMs) excel as listwise rerankers, but their high computational demands remain a barrier to widespread adoption. Further, the traditional language modeling (LM) objective is not ideally suited for reranking tasks. FIRST is a novel approach that addresses these challenges by integrating a learning-to-rank objective and leveraging the logits of only the first generated token, thereby significantly reducing inference latency compared to traditional LLM rerankers. In this study, we extend the evaluation of FIRST to the TREC Deep Learning datasets (DL19-22), validating its robustness across diverse domains. We investigate the influence of different first-stage retrievers on FIRST rerankers, observing diminishing returns and patterns consistent with traditional LLM rerankers. By applying the FIRST objective to a broader range of backbone models, we achieve effectiveness surpassing the original implementation. Our experiments confirm that fast reranking with single-token logits does not compromise out-of-domain reranking quality. To better quantify the computational savings reported in the original study, we measure and compare latency, finding a 21%-42% reduction across various models and benchmarks. Moreover, while LM training implicitly improves zero-shot single-token reranking, our experiments also raise questions about whether LM pre-training may hinder subsequent fine-tuning with the FIRST objective. These findings pave the way for more efficient and effective listwise reranking in future applications.
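To make the single-token mechanism concrete: a listwise LLM reranker is prompted with a query and candidate passages tagged with identifiers ("A", "B", "C", ...), and FIRST scores candidates from the logits of the first decoding step rather than generating a full ranked sequence. The sketch below illustrates that scoring step only; it is a minimal illustration under assumed toy token ids, not the authors' implementation (`first_token_rerank` and the vocabulary sizes are hypothetical).

```python
import numpy as np

def first_token_rerank(logits, id_token_ids):
    """Rank candidates by their identifier-token logit at the first step.

    logits: 1-D array of vocabulary logits from the first decoding step.
    id_token_ids: the token id of each candidate's identifier, in
        candidate order.
    Returns candidate indices sorted from most to least relevant.
    """
    scores = logits[np.asarray(id_token_ids)]  # one logit per candidate
    return np.argsort(-scores).tolist()        # descending logit = ranking

# Toy example: suppose identifiers "A", "B", "C" map to token ids
# 10, 11, 12 in a 50-token vocabulary, and the model's first-step
# logits favor "B" > "A" > "C".
vocab_logits = np.full(50, -5.0)
vocab_logits[10], vocab_logits[11], vocab_logits[12] = 1.2, 3.4, 0.7

ranking = first_token_rerank(vocab_logits, [10, 11, 12])
print(ranking)  # [1, 0, 2] -> candidate B first, then A, then C
```

Because only one forward pass over the prompt and a single decoding step are needed, no autoregressive generation of the full permutation occurs, which is the source of the latency savings discussed above.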