This paper investigates the counterintuitive phenomenon in which fine-tuning pre-trained transformer models degrades performance on the MS MARCO passage ranking task. Through comprehensive experiments involving five model variants, including full-parameter fine-tuning and parameter-efficient LoRA adaptations, we demonstrate that all fine-tuning approaches underperform the base sentence-transformers/all-MiniLM-L6-v2 model (MRR@10: 0.3026). Our analysis reveals that fine-tuning disrupts the optimal embedding space structure learned during the base model's extensive pre-training on 1 billion sentence pairs, which already include 9.1 million MS MARCO samples. UMAP visualizations show progressive flattening of the embedding space, while training dynamics analysis and computational efficiency metrics further support our findings. These results challenge conventional wisdom about the effectiveness of transfer learning on saturated benchmarks and suggest that architectural innovations may be necessary for meaningful improvements.
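For reference, the evaluation metric quoted above can be computed as follows. This is a minimal sketch of MRR@10 under the standard definition (reciprocal rank of the first relevant passage within the top 10, averaged over queries); the function and variable names are illustrative and not taken from the paper's codebase.

```python
# Minimal MRR@10 sketch, assuming per-query ranked passage lists and relevance judgments.
from typing import Dict, List, Set


def mrr_at_10(rankings: Dict[str, List[str]], qrels: Dict[str, Set[str]]) -> float:
    """Mean Reciprocal Rank truncated at depth 10.

    rankings: query id -> passage ids ordered by decreasing model score.
    qrels:    query id -> set of relevant passage ids.
    """
    total = 0.0
    for qid, ranked in rankings.items():
        relevant = qrels.get(qid, set())
        for rank, pid in enumerate(ranked[:10], start=1):
            if pid in relevant:
                total += 1.0 / rank
                break  # only the first relevant hit contributes
    return total / len(rankings) if rankings else 0.0


# Example: the first relevant passage appears at rank 2, so MRR@10 = 0.5
print(mrr_at_10({"q1": ["p9", "p3", "p7"]}, {"q1": {"p3"}}))
```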