Recent developments in diffusion models have advanced conditional image generation, yet these models struggle to reconstruct out-of-distribution (OOD) images, such as unseen tumors in medical images, causing "image hallucination" and risking misdiagnosis. We hypothesize that such hallucinations result from local OOD regions in the conditional images. We verify that partitioning the OOD region and conducting separate image generations alleviates hallucinations in several applications. Building on this, we propose a training-free diffusion framework that reduces hallucination with multiple Local Diffusion processes. Our approach involves OOD estimation followed by two modules: a "branching" module generates images locally both within and outside OOD regions, and a "fusion" module integrates these predictions into one. Our evaluation shows that our method mitigates hallucination over baseline models both quantitatively and qualitatively, reducing misdiagnosis by 40% and 25% on real-world medical and natural image datasets, respectively. It is also compatible with various pre-trained diffusion models.
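To make the branch-then-fuse idea concrete, below is a minimal sketch of how two local generations could be composited with an OOD mask. The generator callables here are hypothetical identity-style stand-ins for the actual Local Diffusion samplers, and the mask-weighted sum is only one simple fusion choice, not the paper's exact fusion module.

```python
import numpy as np

def local_diffusion_fusion(cond_image, ood_mask, gen_inside, gen_outside):
    """Branch: generate separately inside and outside the OOD region,
    then fuse the two local predictions into a single image.

    gen_inside / gen_outside are placeholders for local diffusion
    processes; ood_mask is a soft mask in [0, 1] marking OOD pixels.
    """
    inside = gen_inside(cond_image)    # local generation for the OOD region
    outside = gen_outside(cond_image)  # local generation for the in-distribution region
    # Fusion (illustrative): mask-weighted composition of the two branches.
    return ood_mask * inside + (1.0 - ood_mask) * outside

# Toy usage with stand-in "generators" in place of diffusion samplers.
cond = np.ones((4, 4))
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0  # a 2x2 OOD region in the center
fused = local_diffusion_fusion(cond, mask, lambda x: 2.0 * x, lambda x: x)
```

With the identity-style stand-ins, the fused output takes the inside branch's values exactly where the mask is 1 and the outside branch's values elsewhere, which is the behavior the separate-generation hypothesis relies on.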