Radiology report generation (RRG) has shown great potential in assisting radiologists by automating the labor-intensive task of report writing. While recent advancements have improved the quality and coherence of generated reports, ensuring their factual correctness remains a critical challenge. Although generative medical Vision Large Language Models (VLLMs) have been proposed to address this issue, these models are prone to hallucinations and can produce inaccurate diagnostic information. To address these concerns, we introduce a novel Semantic Consistency-Based Uncertainty Quantification framework that provides both report-level and sentence-level uncertainties. Unlike existing approaches, our method requires no modification to the underlying model and no access to its internal states, such as output token logits, and thus serves as a plug-and-play module that can be seamlessly integrated with state-of-the-art models. Extensive experiments demonstrate the efficacy of our method in detecting hallucinations and enhancing the factual accuracy of automatically generated radiology reports. By abstaining from high-uncertainty reports, our approach improves factuality scores by $10$%, achieved by rejecting $20$% of reports generated by the Radialog model on the MIMIC-CXR dataset. Furthermore, sentence-level uncertainty flags the lowest-precision sentence in each report with an $82.9$% success rate.
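To make the abstention mechanism concrete, the sketch below illustrates one plausible reading of consistency-based uncertainty: sample several reports for the same study, score their mutual agreement, and reject reports whose disagreement exceeds a threshold. The helper names, the use of `difflib` string overlap as a stand-in for a proper semantic-similarity or entailment model, and the threshold value are illustrative assumptions, not the paper's implementation.

```python
from itertools import combinations
from difflib import SequenceMatcher
from typing import List

def similarity(a: str, b: str) -> float:
    """Stand-in semantic similarity based on character overlap;
    a real system would use an embedding or entailment model."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def report_uncertainty(samples: List[str]) -> float:
    """Report-level uncertainty: 1 minus the mean pairwise agreement
    across several reports sampled for the same image."""
    pairs = list(combinations(samples, 2))
    if not pairs:
        return 0.0
    mean_agreement = sum(similarity(a, b) for a, b in pairs) / len(pairs)
    return 1.0 - mean_agreement

def sentence_uncertainty(sentence: str, other_samples: List[str]) -> float:
    """Sentence-level uncertainty: 1 minus the sentence's best match
    against any of the other sampled reports."""
    support = max((similarity(sentence, s) for s in other_samples), default=0.0)
    return 1.0 - support

# Usage: abstain from reports whose uncertainty exceeds a threshold.
samples = [
    "No acute cardiopulmonary process.",
    "No acute cardiopulmonary abnormality.",
    "Large right pleural effusion with compressive atelectasis.",
]
u = report_uncertainty(samples)
if u > 0.5:  # illustrative threshold; in practice tuned, e.g. to reject ~20% of reports
    print(f"Abstain: uncertainty {u:.2f}")
else:
    print(f"Accept: uncertainty {u:.2f}")
```

In practice, the pairwise agreement would be computed with a medically aware similarity or entailment model rather than string overlap, and the abstention threshold would be chosen to trade off the rejection rate against the gain in factuality.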