Large vision language models (VLMs) have progressed rapidly from research prototypes to general-purpose applications. LLaVA-Med, a pioneering large language and vision assistant for biomedicine, can perform multi-modal biomedical image and data analysis, providing a natural language interface for radiologists. While it is highly generalizable and works with multi-modal data, it remains limited by well-known challenges in the large language model space: hallucinations and imprecise responses can lead to misdiagnosis, which currently hinders the clinical adoption of VLMs. To create precise, user-friendly models for healthcare, we propose D-Rax -- a domain-specific, conversational, radiologic assistance tool for gaining insights about a particular radiologic image. In this study, we enhance the conversational analysis of chest X-ray (CXR) images to support radiological reporting, offering comprehensive insights from medical imaging and aiding in the formulation of accurate diagnoses. D-Rax is achieved by fine-tuning the LLaVA-Med architecture on our curated, enhanced instruction-following data, comprising images, instructions, disease diagnosis and demographic predictions derived from MIMIC-CXR imaging data, CXR-related visual question answering (VQA) pairs, and predictive outcomes from multiple expert AI models. We observe statistically significant improvements in responses when evaluated on both open-ended and closed-ended conversations. By combining the power of state-of-the-art diagnostic models with VLMs, D-Rax empowers clinicians to interact with medical images using natural language, which could streamline their decision-making process, enhance diagnostic accuracy, and save time.