Domain shift is a common phenomenon in histopathological imaging, owing to intra- and inter-hospital variability in staining and digitization protocols. Developing robust models that generalize across domains remains an open need. In this work, we present a new domain adaptation method to handle the variability between histopathological images from multiple centers. In particular, our method adds a training constraint to the supervised contrastive learning objective to achieve domain adaptation and improve inter-class separability. Experiments on domain adaptation and classification of whole-slide images of six skin cancer subtypes from two centers demonstrate the method's usefulness. The results show superior performance compared with applying no domain adaptation after feature extraction or stain normalization.
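To make the idea above concrete, the following is a minimal sketch of a supervised contrastive loss (Khosla et al., 2020) combined with a hypothetical domain-alignment penalty. The abstract does not specify the form of the training constraint, so `domain_penalty` (a squared distance between per-domain embedding means) is an illustrative assumption, not the authors' actual formulation.

```python
import numpy as np

def supcon_loss(z, labels, tau=0.1):
    """Supervised contrastive loss over a batch of embeddings z (n, d).

    For each anchor, positives are all other samples sharing its class
    label; the loss is the mean negative log-probability of the positives
    under a temperature-scaled softmax over all other samples.
    """
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalize
    sim = z @ z.T / tau                               # pairwise similarities
    n = len(labels)
    not_self = ~np.eye(n, dtype=bool)
    sim = sim - sim.max(axis=1, keepdims=True)        # numerical stability
    exp_sim = np.exp(sim) * not_self                  # exclude self-pairs
    log_prob = sim - np.log(exp_sim.sum(axis=1, keepdims=True))
    pos = (labels[:, None] == labels[None, :]) & not_self
    valid = pos.sum(axis=1) > 0                       # anchors with >= 1 positive
    per_anchor = -(log_prob * pos).sum(axis=1)[valid] / pos.sum(axis=1)[valid]
    return per_anchor.mean()

def domain_penalty(z, domains):
    """Hypothetical alignment constraint: squared distance between the
    mean embeddings of each domain (e.g., each contributing center)."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    means = [z[domains == d].mean(axis=0) for d in np.unique(domains)]
    return sum(float(np.sum((means[i] - means[j]) ** 2))
               for i in range(len(means)) for j in range(i + 1, len(means)))

def total_loss(z, labels, domains, lam=1.0):
    """Sketch of the combined objective: contrastive term plus a weighted
    domain constraint, encouraging class separability and domain overlap."""
    return supcon_loss(z, labels) + lam * domain_penalty(z, domains)
```

In this sketch, minimizing `total_loss` pulls same-class embeddings together regardless of which center they came from, while the penalty discourages the two centers' embeddings from occupying separate regions of the feature space; the weight `lam` trades off the two terms.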