The adaptation of large-scale Vision-Language Models (VLMs) through post-training reveals a pronounced generalization gap: models fine-tuned with Reinforcement Learning (RL) consistently achieve superior out-of-distribution (OOD) performance compared to those trained with Supervised Fine-Tuning (SFT). This paper posits a data-centric explanation for this phenomenon, contending that RL's generalization advantage arises from an implicit data filtering mechanism that inherently prioritizes medium-difficulty training samples. To test this hypothesis, we systematically evaluate the OOD generalization of SFT models across training datasets of varying difficulty levels. Our results confirm that data difficulty is a critical factor, revealing that training on hard samples significantly degrades OOD performance. Motivated by this finding, we introduce Difficulty-Curated SFT (DC-SFT), a straightforward method that explicitly filters the training set based on sample difficulty. Experiments show that DC-SFT not only substantially enhances OOD generalization over standard SFT, but also surpasses the performance of RL-based training, all while providing greater stability and computational efficiency. This work offers a data-centric account of the OOD generalization gap in VLMs and establishes a more efficient pathway to achieving robust generalization. Code is available at https://github.com/byyx666/DC-SFT.
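The difficulty-curated filtering described above could be sketched as follows. This is a minimal illustrative sketch, not the paper's exact procedure: the sample format, the rollout-based difficulty estimate, and the `low`/`high` thresholds are all assumptions made for illustration.

```python
def estimate_difficulty(sample, model, n_rollouts=8):
    """Hypothetical difficulty proxy: fraction of incorrect model answers
    over n_rollouts attempts. 0.0 = always correct (easy),
    1.0 = always wrong (hard)."""
    wrong = sum(1 for _ in range(n_rollouts)
                if model(sample["question"]) != sample["answer"])
    return wrong / n_rollouts

def difficulty_curated_filter(dataset, model, low=0.25, high=0.75):
    """Keep only medium-difficulty samples for SFT; samples the model
    nearly always solves (too easy) or nearly never solves (too hard)
    are dropped. Threshold values here are illustrative assumptions."""
    return [s for s in dataset
            if low <= estimate_difficulty(s, model) <= high]
```

The filtered subset would then be used as the SFT training set in place of the full dataset; in practice the difficulty estimate could come from any scoring of the base model's per-sample success rate.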