Decoding-free reranking methods that read relevance signals directly from LLM attention weights offer significant latency advantages over autoregressive approaches, yet suffer from attention score homogenization: middle-context documents receive near-identical scores, destroying the fine-grained distinctions required for ranking. We propose HeadRank, a framework that lifts preference optimization from discrete token space into the continuous attention domain through entropy-regularized head selection, hard adjacent-level preference pairs, and a distribution regularizer that jointly sharpen discriminability in the homogenized middle zone. Depth truncation at the deepest selected layer further reduces inference to $\mathcal{O}(1)$ forward passes. Across 14 benchmarks on three Qwen3 scales (0.6B--4B) using only 211 training queries, HeadRank consistently outperforms generative and decoding-free baselines with 100\% formatting success. At 4B, 57.4\% of relevant middle-zone documents reach the top quartile versus 14.2\% for irrelevant ones -- a 43-percentage-point selectivity gap that demonstrates the effectiveness of attention-space preference alignment for listwise reranking.