We evaluated 20+ Transformer models for ranking long documents (including recent LongP models trained with FlashAttention) and compared them with simple FirstP baselines (applying the same model to the input truncated to the first 512 tokens). We used MS MARCO Documents v1 as the primary training set and evaluated models in the zero-shot scenario as well as after fine-tuning on other collections. In our initial experiments with standard collections we found that long-document models underperformed FirstP or outperformed it by at most 5% on average in terms of MRR or NDCG. We then conjectured that this was not due to the models' inability to process long contexts but rather to a positional bias of relevant passages, which tended to be among the first 512 document tokens. We found evidence that this bias was, indeed, present in at least two test sets, which motivated us to create a new collection, MS MARCO FarRelevant, where the relevant passages were not present among the first 512 tokens. Unlike standard collections, where we observed both little benefit from incorporating longer contexts and limited variability in model performance (within a few %), experiments on MS MARCO FarRelevant uncovered dramatic differences among models. FirstP models performed roughly at the random-baseline level in both zero-shot and fine-tuning scenarios. Simple aggregation models (e.g., MaxP) had good zero-shot accuracy but benefited little from fine-tuning. Most other models had poor zero-shot performance (sometimes at the random-baseline level) but outstripped MaxP by as much as 13-28% after fine-tuning. Thus, positional bias not only diminishes the benefits of processing longer document contexts but also leads to models overfitting to this bias and performing poorly in a zero-shot setting when the distribution of relevant passages changes substantially. We make our software and MS MARCO FarRelevant available.
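To make the FirstP and MaxP baselines concrete, the following is a minimal sketch of the two aggregation strategies. It assumes a hypothetical `score(query, passage)` cross-encoder that accepts passages of at most 512 tokens and returns a relevance score; tokenization and model details are omitted.

```python
def chunk(tokens, size=512, stride=512):
    """Split a token sequence into fixed-size, non-overlapping windows."""
    return [tokens[i:i + size] for i in range(0, len(tokens), stride)] or [tokens]

def first_p(score, query, doc_tokens, size=512):
    """FirstP: apply the model only to the first `size` tokens of the document."""
    return score(query, doc_tokens[:size])

def max_p(score, query, doc_tokens, size=512):
    """MaxP: score every chunk independently and keep the maximum score."""
    return max(score(query, c) for c in chunk(doc_tokens, size))
```

When the relevant passage lies beyond the first 512 tokens, as in MS MARCO FarRelevant, `first_p` never sees it, while `max_p` still scores the chunk containing it.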