Diffusion models have enabled remarkably high-quality medical image generation, yet it remains challenging to enforce anatomical constraints in generated images. To this end, we propose a diffusion model-based method that supports anatomically controllable medical image generation, by following a multi-class anatomical segmentation mask at each sampling step. We additionally introduce a random mask ablation training algorithm to enable conditioning on a selected combination of anatomical constraints while allowing flexibility in other anatomical areas. We compare our method ("SegGuidedDiff") to existing methods on breast MRI and abdominal/neck-to-pelvis CT datasets with a wide range of anatomical objects. Results show that our method reaches a new state-of-the-art in the faithfulness of generated images to input anatomical masks on both datasets, and is on par with existing methods in general anatomical realism. Finally, our model offers the added benefit of adjusting the anatomical similarity of generated images to real images of choice through interpolation in its latent space. SegGuidedDiff has many applications, including cross-modality translation and the generation of paired or counterfactual data. Our code is available at https://github.com/mazurowski-lab/segmentation-guided-diffusion.
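The random mask ablation training idea described above can be illustrated with a minimal sketch: during training, each anatomical class in the multi-class conditioning mask is independently dropped with some probability, so the model learns to generate images conditioned on any subset of anatomical constraints. The function name, signature, and drop scheme below are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np


def ablate_mask_classes(mask, class_ids, keep_prob=0.5, rng=None):
    """Randomly remove a subset of anatomical classes from a multi-class
    segmentation mask (integer-labeled array), relabeling dropped classes
    as background (0). Hypothetical sketch of mask ablation training:
    each class in `class_ids` is kept independently with prob `keep_prob`.
    """
    rng = np.random.default_rng() if rng is None else rng
    ablated = mask.copy()
    for cid in class_ids:
        # Drop this anatomical class with probability 1 - keep_prob.
        if rng.random() > keep_prob:
            ablated[ablated == cid] = 0
    return ablated


# Example: a toy 2x2 mask with three anatomical classes.
mask = np.array([[0, 1],
                 [2, 3]])
ablated = ablate_mask_classes(mask, class_ids=[1, 2, 3],
                              rng=np.random.default_rng(0))
```

At each training step, the diffusion model would then be conditioned on the ablated mask rather than the full one, which is what allows users at inference time to constrain only a chosen combination of anatomical structures while leaving the rest unconstrained.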