Multimodal large language models (MLLMs) have recently achieved remarkable progress in radiology by integrating visual perception with natural language understanding. However, they often generate clinically unsupported descriptions, known as medical hallucinations, which pose serious risks in medical applications that demand accuracy and image-grounded outputs. Through empirical analysis, we find that prompt-induced hallucinations remain prevalent in radiology MLLMs, largely due to over-sensitivity to clinical sections. To address this, we introduce Clinical Contrastive Decoding (CCD), a training-free and retrieval-free inference framework that integrates structured clinical signals from task-specific radiology expert models. CCD employs a dual-stage contrastive mechanism to refine token-level logits during generation, thereby enhancing clinical fidelity without modifying the base MLLM. Experiments on three datasets and multiple models demonstrate that CCD consistently improves overall performance on radiology report generation (RRG). On the MIMIC-CXR dataset, it yields up to a 17% improvement in RadGraph-F1 when applied to state-of-the-art RRG models. Our approach provides a lightweight and generalisable solution for mitigating medical hallucinations, effectively bridging expert models and MLLMs in radiology.
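To make the decoding-time mechanism concrete, the following is a minimal Python sketch of one step of a generic dual-stage contrastive adjustment over next-token logits. It is illustrative only: the function name `ccd_style_step`, the log-linear combination, and the mixing weights `alpha` and `beta` are assumptions for exposition, not the exact CCD formulation, which the abstract does not specify.

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    """Numerically stable softmax over a logit vector."""
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def ccd_style_step(
    base_logits: np.ndarray,        # next-token logits from the base MLLM
    clinical_logits: np.ndarray,    # logits conditioned on expert-derived clinical signals (assumed stand-in)
    contrast_logits: np.ndarray,    # logits from a hallucination-prone context, used as the contrast term (assumed)
    alpha: float = 0.5,             # illustrative weight for stage 1 (not from the paper)
    beta: float = 0.5,              # illustrative weight for stage 2 (not from the paper)
) -> np.ndarray:
    """One decoding step of a hypothetical dual-stage contrastive refinement.

    Stage 1 pulls the base distribution toward the expert-informed signal;
    stage 2 pushes it away from the hallucination-prone distribution.
    Returns a probability distribution over the vocabulary.
    """
    # Stage 1: interpolate toward the clinically grounded logits.
    refined = base_logits + alpha * (clinical_logits - base_logits)
    # Stage 2: contrast against the hallucination-prone logits.
    refined = refined + beta * (refined - contrast_logits)
    return softmax(refined)

if __name__ == "__main__":
    # Toy usage with random logits over a 32-token vocabulary.
    rng = np.random.default_rng(0)
    vocab = 32
    probs = ccd_style_step(
        rng.normal(size=vocab), rng.normal(size=vocab), rng.normal(size=vocab)
    )
    print("next token:", int(np.argmax(probs)))
```

Because the adjustment acts purely on logits at each generation step, a scheme of this shape requires no fine-tuning of the base model, which is consistent with the training-free and retrieval-free framing above.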