Supervised Fine-Tuning (SFT) of the language backbone plays a pivotal role in adapting Vision-Language Models (VLMs) to specialized domains such as medical reasoning. However, existing SFT practices often rely on unfiltered textual datasets that contain redundant and low-quality samples, leading to substantial computational costs and suboptimal performance in complex clinical scenarios. Although prior methods attempt to alleviate this problem by selecting data based on sample difficulty, defined by knowledge and reasoning complexity, they overlook each sample's optimization utility as reflected in its gradient. Interestingly, we find that gradient-based influence alone favors easy-to-optimize samples that cause large parameter shifts but lack deep reasoning chains, while difficulty alone selects noisy or overly complex textual cases that fail to guide stable optimization. Based on this observation, we propose a data selection strategy, Difficulty-Influence Quadrant (DIQ), which prioritizes samples in the "high-difficulty, high-influence" quadrant to balance complex clinical reasoning with substantial gradient influence. This enables efficient medical reasoning for VLMs with minimal fine-tuning data. Furthermore, human and LLM-as-a-judge evaluations show that DIQ-selected subsets exhibit higher data quality and generate clinical reasoning that is more aligned with expert practice in differential diagnosis, safety checks, and evidence citation, as DIQ emphasizes samples that foster expert-like reasoning patterns. Extensive experiments on medical reasoning benchmarks demonstrate that DIQ enables VLM backbones fine-tuned on only 1% of selected data to match full-dataset performance, while fine-tuning on 10% consistently outperforms baseline methods, highlighting the superiority of principled data selection over brute-force scaling. The code is available at https://github.com/mihara-bot/DIQ.
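The quadrant-based selection idea described above can be illustrated with a minimal sketch. This is a hypothetical implementation, not the authors' released code: the function name `diq_select` and the choice of median thresholds and a simple additive combined score are assumptions made for illustration, standing in for whatever difficulty and gradient-influence scoring the paper actually uses.

```python
import numpy as np

def diq_select(difficulty, influence, fraction=0.1):
    """Hypothetical sketch of quadrant-based data selection.

    Given per-sample difficulty and gradient-influence scores, keep samples
    that fall in the high-difficulty, high-influence quadrant, then rank them
    to retain only the requested fraction of the full dataset.
    """
    difficulty = np.asarray(difficulty, dtype=float)
    influence = np.asarray(influence, dtype=float)

    # Split each axis at its median to form four quadrants.
    hi_d = difficulty >= np.median(difficulty)
    hi_i = influence >= np.median(influence)

    # Indices of samples in the high-difficulty, high-influence quadrant.
    quadrant = np.flatnonzero(hi_d & hi_i)

    # Within the target quadrant, rank by a simple combined score (an
    # assumption here) and keep the top `fraction` of the full dataset.
    combined = difficulty[quadrant] + influence[quadrant]
    k = max(1, int(fraction * len(difficulty)))
    top = quadrant[np.argsort(combined)[::-1][:k]]
    return np.sort(top)

# Toy usage: sample 1 is both hard and influential, so it is selected.
selected = diq_select(
    difficulty=[0.1, 0.9, 0.8, 0.2],
    influence=[0.2, 0.9, 0.1, 0.8],
    fraction=0.25,
)
print(selected)  # → [1]
```

The returned indices would then define the 1% or 10% subset used for SFT in place of the full dataset.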