Automated cell segmentation in microscopy images is essential for biomedical research, yet conventional methods are labor-intensive and prone to error. While deep learning-based approaches have proven effective, they typically require large annotated datasets, which are scarce due to the challenges of manual annotation. To overcome this limitation, we propose a novel framework for synthesizing densely annotated 2D and 3D cell microscopy images using cascaded diffusion models. Our method synthesizes 2D and 3D cell masks from sparse 2D annotations using multi-level diffusion models and NeuS, a 3D surface reconstruction approach. Subsequently, a pretrained 2D Stable Diffusion model is fine-tuned to generate realistic cell textures, and the final outputs are combined to form cell populations. We show that training a segmentation model on a combination of our synthetic data and real data improves segmentation performance by up to 9\% across multiple datasets. Additionally, Fréchet Inception Distance (FID) scores indicate that the synthetic data closely resembles the real data. The code for our proposed approach will be available at https://github.com/ruveydayilmaz0/cascaded\_diffusion.
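As a brief illustration of the training setup described above, the following is a minimal sketch, assuming a PyTorch-based segmentation pipeline; the toy dataset, the data paths implied by it, and the small stand-in network are hypothetical placeholders for illustration, not the released implementation.

\begin{verbatim}
# Minimal sketch: train a segmentation model on a pool of real and
# synthetic annotated samples. All components below are illustrative
# placeholders, not the authors' released code.
import torch
from torch import nn
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

def toy_dataset(n):
    # Stand-in for microscopy images (1x64x64) with binary masks.
    images = torch.rand(n, 1, 64, 64)
    masks = (images > 0.5).float()
    return TensorDataset(images, masks)

real_ds = toy_dataset(32)       # would be real annotated images in practice
synthetic_ds = toy_dataset(64)  # would be samples from the diffusion pipeline

# Combine real and synthetic samples into a single training pool.
loader = DataLoader(ConcatDataset([real_ds, synthetic_ds]),
                    batch_size=8, shuffle=True)

# Tiny stand-in for a segmentation network (e.g. a U-Net in practice).
model = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(16, 1, 3, padding=1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.BCEWithLogitsLoss()

for images, masks in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), masks)
    loss.backward()
    optimizer.step()
\end{verbatim}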