The strong capabilities of recent Large Language Models (LLMs) have made them highly effective for zero-shot re-ranking tasks. Attention-based re-ranking methods, which derive relevance scores directly from attention weights, offer an efficient and interpretable alternative to generation-based re-ranking methods. However, they still face two major limitations. First, attention signals are highly concentrated on a small subset of tokens within a few documents, making the remaining tokens indistinguishable. Second, attention often overemphasizes phrases lexically similar to the query, yielding biased rankings in which irrelevant documents are deemed relevant merely because of lexical resemblance. In this paper, we propose \textbf{ReAttn}, a post-hoc re-weighting strategy for attention-based re-ranking methods. It first computes a cross-document IDF weighting to down-weight attention on query-overlapping tokens that appear frequently across the candidate documents, reducing lexical bias and emphasizing distinctive terms. It then employs entropy-based regularization to mitigate over-concentrated attention, encouraging a more balanced distribution across informative tokens. Both adjustments operate directly on existing attention weights without additional training or supervision. Extensive experiments demonstrate the effectiveness of our method.
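The two adjustments above can be sketched as follows. This is a minimal illustrative implementation, not the authors' released code: the function names, the smoothed IDF formula, and the use of a temperature to raise entropy are all assumptions made for the sketch.

```python
import math
from collections import Counter

def idf_reweight(docs, attn):
    """Hypothetical sketch of the cross-document IDF step: down-weight
    attention on tokens that appear in many candidate documents.

    docs: list of token lists (one per candidate document)
    attn: list of attention-weight lists, aligned with docs
    Returns re-normalized attention weights per document.
    """
    n = len(docs)
    df = Counter()  # document frequency of each token across candidates
    for doc in docs:
        df.update(set(doc))
    out = []
    for doc, weights in zip(docs, attn):
        # Smoothed IDF: tokens shared by all candidates get weight ~1,
        # distinctive tokens get a boost.
        idf = [math.log((n + 1) / (df[tok] + 1)) + 1.0 for tok in doc]
        rw = [w * f for w, f in zip(weights, idf)]
        total = sum(rw)
        out.append([w / total for w in rw])
    return out

def entropy_smooth(weights, tau=2.0):
    """One simple way to flatten an over-concentrated attention
    distribution: raise weights to the power 1/tau (tau > 1) and
    re-normalize, which increases the distribution's entropy."""
    powed = [w ** (1.0 / tau) for w in weights]
    total = sum(powed)
    return [w / total for w in powed]
```

In this sketch, tokens shared by every candidate (which are often exactly the query-overlapping tokens) receive the minimum IDF weight and are thus relatively suppressed, while the temperature-based smoothing redistributes mass away from the few dominant tokens.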