The vision-language modeling capability of multi-modal large language models has attracted wide attention from the community. However, in the medical domain, radiology report generation with vision-language models still faces significant challenges: the data distribution is imbalanced by the numerous negated descriptions in radiology reports, and the alignment between reports and radiographs is coarse. In this paper, we propose TRRG, a truthful radiology report generation framework based on stage-wise training that injects cross-modal disease clues into large language models. In the pre-training stage, contrastive learning is employed to enhance the visual encoder's ability to perceive fine-grained disease details. In the fine-tuning stage, our proposed clue injection module significantly improves the disease-oriented perception of the large language model by effectively incorporating its robust zero-shot disease perception. Finally, through the cross-modal clue interaction module, our model achieves multi-granular interaction between visual embeddings and an arbitrary number of disease clue embeddings, which significantly enhances the report generation capability and clinical effectiveness of multi-modal large language models in radiology report generation. Experimental results demonstrate that our proposed pre-training and fine-tuning framework achieves state-of-the-art performance in radiology report generation on the IU-Xray and MIMIC-CXR datasets. Further analysis indicates that our method effectively strengthens the model's disease perception and improves its clinical effectiveness.
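The contrastive pre-training objective mentioned above can be illustrated with a minimal sketch. The snippet below implements a standard symmetric InfoNCE loss over paired image/report embeddings (CLIP-style); this is an assumed, generic formulation for illustration only, as the abstract does not specify the exact loss used in TRRG, and the function name `info_nce_loss` and the temperature value are hypothetical.

```python
import numpy as np

def info_nce_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired image/report embeddings.

    Illustrative only: a generic contrastive objective of the kind used to
    align radiographs with report text; the paper's formulation may differ.
    """
    # L2-normalize so dot products become cosine similarities
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature   # (B, B) similarity matrix
    labels = np.arange(len(logits))      # matched pairs lie on the diagonal

    def cross_entropy(lg, lb):
        # row-wise log-softmax, then pick out the positive-pair terms
        log_probs = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -log_probs[lb, lb].mean()

    # symmetric: image-to-text and text-to-image directions
    return 0.5 * (cross_entropy(logits, labels) + cross_entropy(logits.T, labels))
```

Minimizing this loss pulls each radiograph embedding toward the embedding of its own report and pushes it away from the other reports in the batch, which is how the visual encoder is encouraged to pick up fine-grained, report-relevant disease details.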