Creating in-silico data with generative AI promises a cost-effective alternative to staining, imaging, and annotating whole-slide images in computational pathology. Diffusion models are the state-of-the-art approach for generating in-silico images, offering unparalleled fidelity and realism. Appearance transfer diffusion models enable zero-shot image generation, allowing fast application without any model training. However, current appearance transfer diffusion models are designed for natural images, where the main task is to transfer a foreground object from a source to a target domain while the background is of little importance. In computational pathology, and in oncology in particular, it is not straightforward to classify objects in an image as foreground or background, since every object may be critical for a detailed understanding of the tumor microenvironment. We improve the applicability of appearance transfer diffusion models to immunohistochemistry-stained images by modifying the appearance transfer guidance to alternate between class-specific AdaIN feature-statistics matchings based on existing segmentation masks. We demonstrate the performance of the proposed method on the downstream task of supervised epithelium segmentation, showing that it reduces the number of manual annotations required for model training by 75% and outperforms the baseline approach. Additionally, we consulted a certified pathologist to identify future improvements. We anticipate this work will inspire the application of zero-shot diffusion models in computational pathology, providing an efficient way to generate in-silico images with unmatched fidelity and realism that prove meaningful for downstream tasks such as training existing deep learning models or fine-tuning foundation models.
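To make the core idea concrete, the following is a minimal sketch of class-specific AdaIN feature-statistics matching guided by segmentation masks. It is an illustrative simplification, not the authors' implementation: function names, array shapes, and the use of NumPy on per-class feature regions are assumptions, and in the actual method the matching is applied to diffusion-model features during guidance rather than to raw arrays.

```python
import numpy as np

def adain(content, style, eps=1e-5):
    """Shift/scale a 1-D set of feature values so its mean and std
    match those of the style values (standard AdaIN statistics matching)."""
    c_mean, c_std = content.mean(), content.std()
    s_mean, s_std = style.mean(), style.std()
    return (content - c_mean) / (c_std + eps) * s_std + s_mean

def classwise_adain(features, style_features, mask, style_mask, eps=1e-5):
    """Apply AdaIN separately per semantic class using segmentation masks.

    features, style_features : (C, H, W) feature maps (hypothetical layout)
    mask, style_mask         : (H, W) integer class masks aligned to the maps

    Instead of one global foreground/background transfer, each class's
    statistics are matched only against the same class in the style image,
    so e.g. epithelium regions inherit epithelium appearance statistics.
    """
    out = features.copy()
    shared_classes = np.intersect1d(np.unique(mask), np.unique(style_mask))
    for cls in shared_classes:
        m, sm = (mask == cls), (style_mask == cls)
        for ch in range(features.shape[0]):
            out[ch, m] = adain(features[ch, m], style_features[ch, sm], eps)
    return out
```

In this sketch, alternating between class-specific matchings simply means iterating over the classes shared by both masks; regions whose class is absent from the style image are left unchanged.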