This study introduces Polyp-DDPM, a diffusion-based method for generating realistic polyp images conditioned on segmentation masks, aimed at enhancing the segmentation of gastrointestinal (GI) tract polyps. Our approach addresses the data scarcity, high annotation cost, and privacy concerns associated with medical images. By conditioning the diffusion model on segmentation masks (binary masks that delineate abnormal regions), Polyp-DDPM outperforms state-of-the-art methods in both image quality (a Fréchet Inception Distance (FID) of 78.47, versus 83.79 or higher for competing methods) and segmentation performance (an Intersection over Union (IoU) of 0.7156, versus 0.6694 or lower for synthetic images from baseline models and 0.7067 for real data). Our method generates a high-quality, diverse synthetic dataset for training, enabling polyp segmentation models trained on it to perform comparably to those trained on real images, and offering stronger data augmentation for improving segmentation models. The source code and pretrained weights for Polyp-DDPM are publicly available at https://github.com/mobaidoctor/polyp-ddpm.
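To illustrate the mask-conditioning idea, the sketch below shows one common way a DDPM can be conditioned on a binary segmentation mask: the image is forward-diffused to timestep t, and the mask is concatenated to the noisy image as an extra channel of the denoiser's input. This is a minimal, hypothetical NumPy sketch of the conditioning mechanism, not the authors' implementation; the function name and noise schedule are illustrative assumptions.

```python
import numpy as np

def make_conditioned_input(image, mask, t, betas, rng):
    """Forward-diffuse `image` to step t, then stack the binary mask as an
    extra channel (one plausible form of mask conditioning; an assumption,
    not the paper's exact architecture)."""
    alphas_bar = np.cumprod(1.0 - betas)          # cumulative signal-retention factors
    noise = rng.standard_normal(image.shape)      # epsilon ~ N(0, I)
    # Standard DDPM forward process: x_t = sqrt(a_bar_t) x_0 + sqrt(1 - a_bar_t) eps
    noisy = np.sqrt(alphas_bar[t]) * image + np.sqrt(1.0 - alphas_bar[t]) * noise
    # Channel-wise concatenation: noisy image (C, H, W) + mask (1, H, W)
    return np.concatenate([noisy, mask[None]], axis=0), noise

rng = np.random.default_rng(0)
image = rng.standard_normal((3, 64, 64))            # toy RGB endoscopy image
mask = (rng.random((64, 64)) > 0.9).astype(float)   # toy binary polyp mask
betas = np.linspace(1e-4, 0.02, 1000)               # common linear DDPM schedule
x_t, eps = make_conditioned_input(image, mask, t=500, betas=betas, rng=rng)
print(x_t.shape)  # (4, 64, 64): 3 image channels + 1 mask channel
```

During training, a denoiser would take `x_t` (with its mask channel) and `t` as input and be optimized to predict `eps`; at sampling time, feeding a new mask steers generation so that the synthetic polyp appears in the masked region.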