Recent developments in diffusion models have advanced conditioned image generation, yet they struggle to reconstruct out-of-distribution (OOD) images, such as unseen tumors in medical images, causing ``image hallucination'' and risking misdiagnosis. We hypothesize that such hallucinations result from local OOD regions in the conditional images. We verify that partitioning the OOD region and conducting separate image generations alleviates hallucinations in several applications. From this, we propose a training-free diffusion framework that reduces hallucination with multiple Local Diffusion processes. Our approach involves OOD estimation followed by two modules: a ``branching'' module generates locally both within and outside OOD regions, and a ``fusion'' module integrates these local predictions into one image. Our evaluation shows that our method mitigates hallucination compared to baseline models, both quantitatively and qualitatively, reducing misdiagnosis by 40% and 25% on real-world medical and natural image datasets, respectively. It is also compatible with various pre-trained diffusion models.
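The branching-and-fusion idea can be illustrated with a minimal sketch. This is not the paper's implementation: the `generate` callable stands in for an arbitrary pre-trained diffusion sampler, and the masking/stitching scheme shown here is a simplified assumption about how local generations could be combined.

```python
import numpy as np

def local_diffusion_sketch(cond_image, ood_mask, generate):
    """Toy sketch of the branching/fusion idea (hypothetical API).

    cond_image : conditional input image, shape (H, W)
    ood_mask   : boolean array, True inside the estimated OOD region
    generate   : stand-in for a pre-trained diffusion sampler that
                 reconstructs an image from a conditional input
    """
    # Branching: run separate local generations for each region, so
    # in-distribution context does not hallucinate over the OOD region.
    inside = generate(np.where(ood_mask, cond_image, 0.0))
    outside = generate(np.where(ood_mask, 0.0, cond_image))
    # Fusion: stitch the two local predictions back into one image.
    return np.where(ood_mask, inside, outside)
```

With an identity sampler, the stitched output simply reproduces the conditional image, which makes the masking logic easy to check; a real diffusion sampler would replace `generate`.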