General-purpose large Vision-Language Models (VLMs) demonstrate strong capabilities in generating detailed descriptions for natural images. However, their performance in the medical domain remains suboptimal, even on relatively straightforward tasks, primarily due to the lack of large-scale, high-quality, specialized medical imaging datasets and the neglect of the coarse-to-fine progression of the diagnostic process. To address the first issue, we construct CT-RATE-VQA, a dataset of 84K QA pairs. For the second, we propose MedReason-R1, a medical VLM with an explicit reasoning process for disease diagnosis. MedReason-R1 incorporates a novel strategy that embeds zoomed-in disease regions of interest into the image, highlighting the crucial role of both global localization and disease-specific details in enhancing the model's diagnostic performance. Furthermore, we introduce the GRPO reinforcement learning framework into MedReason-R1, which enables effective reasoning without relying on costly manual annotations. Compared to recent general-purpose and medical VLMs, MedReason-R1 achieves state-of-the-art performance in CT disease diagnosis while retaining generalization ability. The code, checkpoints, and dataset are available at: https://github.com/Leevan001/MedReason-R1