Large Reasoning Models (LRMs) have introduced a new paradigm in AI by enabling models to ``think before responding'' via chain-of-thought reasoning. However, the absence of open and reproducible recipes for building reasoning-centric medical Large Multimodal Models (LMMs) hinders community-wide research, analysis, and comparison. In this paper, we present MedVLThinker, a suite of simple yet strong baselines. Our fully open recipe consists of: (1) systematic data curation for both text-only and image-text medical data, filtered according to varying levels of reasoning difficulty, and (2) two training paradigms: Supervised Fine-Tuning (SFT) on distilled reasoning traces and Reinforcement Learning with Verifiable Rewards (RLVR) based on final-answer correctness. Across extensive experiments on the Qwen2.5-VL model family (3B, 7B) and six medical QA benchmarks, we find that RLVR consistently and significantly outperforms SFT. Moreover, under the RLVR framework, a key counter-intuitive finding is that training on our curated text-only reasoning data yields a larger performance boost than training on multimodal image-text data. Our best open 7B model, trained with the RLVR recipe on text-only data, establishes a new state of the art on existing public VQA benchmarks, surpassing all previous open-source medical LMMs. Furthermore, scaling our model to 32B achieves performance on par with the proprietary GPT-4o. We release all curated data, models, and code to provide the community with a strong, open foundation for future research in multimodal medical reasoning.
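To make the RLVR setup concrete, the sketch below shows one way a verifiable reward based on final-answer correctness could look, assuming multiple-choice medical QA where the model is prompted to end its output with a line such as "Answer: B". The function name and parsing rule are illustrative assumptions, not the paper's exact implementation.

```python
import re

def verifiable_reward(model_output: str, gold_answer: str) -> float:
    """Binary RLVR-style reward: 1.0 if the parsed final answer matches
    the gold label, 0.0 otherwise. Hypothetical helper; the paper's
    actual answer-extraction rules may differ."""
    # Look for a final answer of the form "Answer: <letter>".
    match = re.search(r"Answer:\s*([A-E])", model_output, flags=re.IGNORECASE)
    if match is None:
        return 0.0  # unparseable outputs earn no reward
    return 1.0 if match.group(1).upper() == gold_answer.upper() else 0.0

# Usage: score sampled rollouts against their gold labels.
print(verifiable_reward("...chain of thought...\nAnswer: B", "B"))  # 1.0
print(verifiable_reward("...chain of thought...\nAnswer: C", "B"))  # 0.0
```

Because the reward depends only on the final answer, it requires no learned reward model and is trivially verifiable, which is the property RLVR relies on.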