Recent advances have demonstrated that large language models (LLMs) excel as listwise rerankers, but their high computational demands remain a barrier to widespread adoption. Further, the traditional language modeling (LM) objective is not ideally suited for reranking tasks. FIRST is a novel approach that addresses these challenges by integrating a learning-to-rank objective and leveraging the logits of only the first generated token, thereby significantly reducing inference latency compared to traditional LLM rerankers. In this study, we extend the evaluation of FIRST to the TREC Deep Learning datasets (DL19-22), validating its robustness across diverse domains. We investigate the influence of different first-stage retrievers on FIRST rerankers, observing diminishing returns and patterns consistent with traditional LLM rerankers. By applying the FIRST objective to a broader range of backbone models, we achieve effectiveness surpassing the original implementation. Our experiments confirm that fast reranking with single-token logits does not compromise out-of-domain reranking quality. To better quantify the computational savings reported in the original study, we measure and compare latency, finding a 21%-42% reduction across various models and benchmarks. Finally, while LM training implicitly improves zero-shot single-token reranking, our experiments raise the question of whether LM pre-training may hinder subsequent fine-tuning with the FIRST objective. These findings pave the way for more efficient and effective listwise reranking in future applications.
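To make the core mechanism concrete, the sketch below illustrates single-token-logit listwise reranking under stated assumptions: it is not the authors' implementation. The backbone name ("gpt2"), the prompt format, and the single-character passage identifiers ("A", "B", ...) are placeholders; FIRST fine-tunes larger LLM rerankers with a learning-to-rank loss so that the identifier logits at the first decoding step encode a full candidate ordering, which is what makes one forward pass sufficient.

```python
# Minimal sketch (assumptions labeled) of reranking candidates by the logits
# the model assigns to passage identifiers at the FIRST decoding step.
# Model name, prompt template, and identifier scheme are hypothetical.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder backbone, not the backbone used in FIRST
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def first_token_rerank(query: str, passages: list[str]) -> list[int]:
    """Order passages by the logit their identifier receives at the first
    decoding step: one forward pass, no autoregressive generation."""
    ids = [chr(ord("A") + i) for i in range(len(passages))]  # "A", "B", ...
    prompt = (
        f"Query: {query}\n"
        + "".join(f"[{pid}] {p}\n" for pid, p in zip(ids, passages))
        + "Rank the passages. Most relevant first:"
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        # Logits over the vocabulary for the next (first generated) token only.
        logits = model(**inputs).logits[0, -1]
    # Score each passage by the logit of its single-token identifier.
    scores = [
        logits[tokenizer.encode(pid, add_special_tokens=False)[0]].item()
        for pid in ids
    ]
    # Return passage indices, best first.
    return sorted(range(len(passages)), key=lambda i: -scores[i])

# Usage: order = first_token_rerank("what is dense retrieval?", candidates)
```

The latency saving the abstract quantifies comes precisely from this design choice: a traditional listwise LLM reranker must decode an entire permutation string token by token, whereas scoring identifier logits at the first step requires a single forward pass.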