Accurate segmentation of cell nuclei in histopathology images is essential for numerous biomedical research and clinical applications. However, existing cell nucleus segmentation methods only consider a single dataset (i.e., primary domain), neglecting to leverage supplementary data from diverse sources (i.e., auxiliary domains) to reduce overfitting and enhance performance. Although incorporating multiple datasets could alleviate overfitting, it often exacerbates performance drops caused by domain shifts. In this work, we introduce Adversarial Multi-domain Alignment of Segment Anything Model (AMA-SAM), which extends the Segment Anything Model (SAM) to overcome these obstacles through two key innovations. First, we propose a Conditional Gradient Reversal Layer (CGRL), a multi-domain alignment module that harmonizes features from diverse domains to promote domain-invariant representation learning while preserving crucial discriminative features for the primary dataset. Second, we address SAM's inherent low-resolution output by designing a High-Resolution Decoder (HR-Decoder), which directly produces fine-grained segmentation maps to capture intricate nuclei boundaries in high-resolution histology images. To the best of our knowledge, this is the first attempt to adapt SAM for multi-dataset learning with application to histology nuclei segmentation. We validate our method on several publicly available datasets, demonstrating consistent and significant improvements over state-of-the-art approaches.
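To make the CGRL idea concrete, the sketch below illustrates one plausible conditioning rule in plain NumPy: the layer is an identity in the forward pass, while in the backward pass it reverses (and scales by a factor λ) gradients coming from auxiliary-domain samples and passes primary-domain gradients through unchanged. The function names, the `primary_domain` convention, and the exact per-sample sign rule are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def cgrl_forward(x):
    # Forward pass is the identity: features flow onward unchanged,
    # as in a standard gradient reversal layer.
    return x

def cgrl_backward(grad_output, domain_labels, lam=1.0, primary_domain=0):
    # Hypothetical conditioning rule (assumption): gradients from
    # auxiliary-domain samples are reversed and scaled by -lam to push
    # the encoder toward domain-invariant features, while gradients
    # from primary-domain samples pass through with sign +1, so the
    # primary dataset's discriminative features are preserved.
    sign = np.where(domain_labels == primary_domain, 1.0, -lam)
    return grad_output * sign[:, None]

# Toy usage: a batch of 4 feature vectors with domain labels
# [primary, auxiliary, auxiliary, primary].
feats = np.ones((4, 3))
grads = np.ones((4, 3))
domains = np.array([0, 1, 1, 0])
out = cgrl_forward(feats)
back = cgrl_backward(grads, domains, lam=0.5)
```

In a real training setup this per-sample sign flip would live inside a custom autograd function (e.g., `torch.autograd.Function` in PyTorch), sitting between the SAM encoder and a domain discriminator.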