In computational pathology, deep learning (DL) models for tasks such as segmentation or tissue classification are known to suffer from domain shift caused by differing staining techniques. Stain adaptation aims to reduce the generalization error between stains by training a model on source stains that generalizes to target stains. Although target stain data are abundant, a key challenge is the lack of annotations. To address this, we propose Unsupervised Latent Stain Adaptation (ULSA), a joint training scheme over artificially labeled and unlabeled data that includes all available stained images. Our method uses stain translation to enrich labeled source images with synthetic target images, increasing the supervised signal. In addition, we leverage unlabeled target stain images through stain-invariant feature consistency learning. ULSA thus provides a semi-supervised strategy for efficient stain adaptation without access to annotated target stain data. Notably, ULSA is task agnostic for patch-level analysis of whole slide images (WSIs). Through extensive evaluation on external datasets, we show that ULSA achieves state-of-the-art (SOTA) performance in kidney tissue segmentation and breast cancer classification across a spectrum of staining variations. Our findings suggest that ULSA is an important framework for stain adaptation in computational pathology.
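The two training signals described above can be sketched as a single joint objective. The following is a minimal, hypothetical PyTorch illustration, not the authors' implementation: the tiny model, the `ulsa_step` function, and the color-jitter stand-in for a learned stain translator are all illustrative placeholders.

```python
# Hypothetical sketch of a ULSA-style joint objective in PyTorch.
# TinySegNet and fake_stain are placeholders, not the paper's architecture
# or its stain-translation model.
import torch
import torch.nn.functional as F

class TinySegNet(torch.nn.Module):
    """Placeholder patch-level model with a feature extractor and a head."""
    def __init__(self, in_ch=3, n_classes=4):
        super().__init__()
        self.backbone = torch.nn.Conv2d(in_ch, 8, kernel_size=3, padding=1)
        self.head = torch.nn.Conv2d(8, n_classes, kernel_size=1)

    def features(self, x):
        return torch.relu(self.backbone(x))

    def forward(self, x):
        return self.head(self.features(x))

def ulsa_step(model, labeled_img, label, unlabeled_img, stain_translate,
              consistency_weight=1.0):
    """One training step combining both ULSA signals."""
    # 1) Supervised signal: the real source image and its stain-translated
    #    synthetic target version share the same pixel-level label.
    synthetic = stain_translate(labeled_img)
    sup_loss = (F.cross_entropy(model(labeled_img), label)
                + F.cross_entropy(model(synthetic), label))

    # 2) Stain-invariant feature consistency on unlabeled target images:
    #    latent features should not change under a stain perturbation.
    feat_real = model.features(unlabeled_img)
    feat_pert = model.features(stain_translate(unlabeled_img))
    cons_loss = F.mse_loss(feat_real, feat_pert)

    return sup_loss + consistency_weight * cons_loss

# Usage with a crude per-channel color scaling standing in for stain translation.
model = TinySegNet()
fake_stain = lambda x: (x * torch.tensor([0.9, 1.1, 1.0]).view(1, 3, 1, 1)).clamp(0, 1)
x = torch.rand(2, 3, 16, 16)            # labeled source-stain patches
y = torch.randint(0, 4, (2, 16, 16))    # pixel-level labels
u = torch.rand(2, 3, 16, 16)            # unlabeled target-stain patches
loss = ulsa_step(model, x, y, u, fake_stain)
loss.backward()  # gradients flow through both supervised and consistency terms
```

Because both loss terms act on the same backbone, the consistency term pushes the latent representation toward stain invariance while the supervised term keeps it task-discriminative.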