This study investigates the explainability of generative diffusion models in the context of medical imaging, focusing on magnetic resonance imaging (MRI) synthesis. Although diffusion models have shown strong performance in generating realistic medical images, their internal decision-making process remains largely opaque. We present a faithfulness-based explainability framework that analyzes how prototype-based methods such as ProtoPNet (PPNet), Enhanced ProtoPNet (EPPNet), and ProtoPool can relate generated features to training features. Our study focuses on understanding the reasoning behind image formation through the denoising trajectory of the diffusion model, followed by prototype-based explanation with faithfulness analysis. Experimental analysis shows that EPPNet achieves the highest faithfulness score (0.1534), offering more reliable insight into the generative process. The results highlight that diffusion models can be made more transparent and trustworthy through faithfulness-based explanations, contributing to safer and more interpretable applications of generative AI in healthcare.
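The abstract does not specify how the faithfulness score is computed. As a point of reference only, the sketch below illustrates one common perturbation-based formulation (faithfulness correlation): image patches ranked by a prototype's activation map are ablated one at a time, and each patch's attribution mass is correlated with the resulting drop in the model's score. All names here (`faithfulness_score`, `model`, `attribution`) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
import torch

def faithfulness_score(model, image, attribution, n_patches=16, patch=32):
    """Illustrative perturbation-based faithfulness (an assumption, not
    necessarily the paper's metric): correlate each patch's attribution
    mass with the drop in the model's score when that patch is ablated.

    model       -- callable mapping a (1, C, H, W) tensor to a scalar score
    image       -- (1, C, H, W) torch tensor, e.g. a synthesized MRI slice
    attribution -- (H, W) numpy array, e.g. an upsampled prototype
                   activation map from PPNet/EPPNet/ProtoPool
    """
    H, W = attribution.shape
    # Tile the image and rank tiles by total attribution mass.
    tiles = [(y, x) for y in range(0, H, patch) for x in range(0, W, patch)]
    tiles.sort(key=lambda t: attribution[t[0]:t[0]+patch, t[1]:t[1]+patch].sum(),
               reverse=True)
    drops, masses = [], []
    with torch.no_grad():
        base = model(image).item()
        for y, x in tiles[:n_patches]:
            masked = image.clone()
            masked[..., y:y+patch, x:x+patch] = 0.0   # ablate one tile
            drops.append(base - model(masked).item())  # observed score drop
            masses.append(float(attribution[y:y+patch, x:x+patch].sum()))
    # Pearson correlation: a high value means the attribution faithfully
    # tracks the model's actual dependence on those regions.
    return float(np.corrcoef(masses, drops)[0, 1])
```

Ablating patches to zero is the simplest baseline; blurring or noising the patch are common alternatives that reduce out-of-distribution artifacts in the perturbed inputs.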