Transferring knowledge from a source domain to a target domain can be crucial for whole slide image classification, since the number of samples in a dataset is often limited due to high annotation costs. However, domain shift and task discrepancy between datasets can hinder effective knowledge transfer. In this paper, we propose a Target-Aware Knowledge Transfer framework based on a teacher-student paradigm. Our framework enables the teacher model to learn common knowledge from the source and target domains by actively incorporating unlabelled target images into its training. The teacher's bag features are subsequently adapted to supervise the training of the student model on the target domain. Despite incorporating the target features during training, the teacher model tends to overlook them under the inherent domain shift and task discrepancy. To alleviate this, we introduce a target-aware feature alignment module that establishes a transferable latent relationship between the source and target features by solving an optimal transport problem. Experimental results show that models employing knowledge transfer outperform those trained from scratch, and that our method achieves state-of-the-art performance among knowledge transfer methods on various datasets, including TCGA-RCC, TCGA-NSCLC, and Camelyon16.
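To make the optimal-transport-based alignment concrete, the sketch below shows one standard way to couple two sets of bag features: entropy-regularized optimal transport solved with Sinkhorn iterations, followed by a barycentric projection that maps source features into the target feature space. This is a minimal NumPy illustration of the general technique, not the paper's actual module; the function name, uniform marginals, squared-Euclidean cost, and hyperparameters (`eps`, `n_iters`) are assumptions for the example.

```python
import numpy as np

def sinkhorn_alignment(source_feats, target_feats, eps=0.1, n_iters=200):
    """Entropy-regularized OT coupling between two feature sets.

    Illustrative sketch only (not the paper's implementation): returns the
    transport plan P (n_s x n_t) and a barycentric mapping of each source
    feature onto the target features under that plan.
    """
    n_s, n_t = len(source_feats), len(target_feats)
    # Squared-Euclidean cost between every source/target feature pair.
    diff = source_feats[:, None, :] - target_feats[None, :, :]
    cost = (diff ** 2).sum(-1)
    cost = cost / cost.max()            # normalize for numerical stability
    K = np.exp(-cost / eps)             # Gibbs kernel
    a = np.full(n_s, 1.0 / n_s)         # uniform source marginal (assumed)
    b = np.full(n_t, 1.0 / n_t)         # uniform target marginal (assumed)
    u = np.ones(n_s)
    for _ in range(n_iters):            # Sinkhorn fixed-point iterations
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]     # transport plan with marginals ~ (a, b)
    # Barycentric projection: each source feature becomes a P-weighted
    # average of the target features.
    mapped = (P @ target_feats) / P.sum(axis=1, keepdims=True)
    return P, mapped
```

In a teacher-student setting, such a plan could serve as a soft correspondence telling the alignment module which target bag features each source (teacher) feature should attend to; production systems would typically use a library such as POT rather than hand-rolled iterations.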