Given the growing workload of pathologists, the need for automation to support diagnostic tasks and quantitative biomarker evaluation is becoming increasingly apparent. Foundation models have the potential to improve generalizability within and across centers and to serve as starting points for the data-efficient development of specialized yet robust AI models. However, training foundation models is itself usually very expensive in terms of data, computation, and time. This paper proposes a supervised training method that drastically reduces these expenses. The method uses multi-task learning to train a joint encoder on a combination of 16 different classification, segmentation, and detection tasks spanning a total of 912,000 patches. Because the encoder captures the tissue properties of the samples, we term it the Tissue Concepts encoder. To evaluate the performance and generalizability of the Tissue Concepts encoder across centers, we used classification of whole-slide images from four of the most prevalent solid cancers - breast, colon, lung, and prostate. The experiments show that the Tissue Concepts model achieves performance comparable to models trained with self-supervision while requiring only 6% of the training patches. Furthermore, the Tissue Concepts encoder outperforms an ImageNet-pretrained encoder on both in-domain and out-of-domain data.
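To make the multi-task setup concrete, the sketch below shows a shared encoder feeding task-specific heads, with per-task losses summed into one training objective. This is a minimal illustration under assumptions: the encoder, head, and task names are hypothetical, the loss is a placeholder, and NumPy stands in for a real deep-learning framework; it is not the authors' implementation.

```python
# Minimal multi-task sketch: one shared encoder, one lightweight head per
# task type (classification, segmentation, detection). All names, shapes,
# and losses are illustrative assumptions, not the paper's architecture.
import numpy as np

rng = np.random.default_rng(0)

def encoder(patch, W_enc):
    """Shared encoder: flattens a patch and projects it to an embedding."""
    return np.tanh(patch.reshape(-1) @ W_enc)

# Task-specific heads; real heads would be decoders / detection modules.
def classification_head(z, W): return z @ W                  # class logits
def segmentation_head(z, W):   return (z @ W).reshape(8, 8)  # coarse mask logits
def detection_head(z, W):      return z @ W                  # box regression

D_in, D_emb = 32 * 32, 16
W_enc = rng.normal(0, 0.01, (D_in, D_emb))
heads = {
    "tumor_cls":   (classification_head, rng.normal(0, 0.01, (D_emb, 2))),
    "gland_seg":   (segmentation_head,   rng.normal(0, 0.01, (D_emb, 64))),
    "mitosis_det": (detection_head,      rng.normal(0, 0.01, (D_emb, 4))),
}

def multi_task_step(batches):
    """Sum per-task losses computed on the shared embedding, so gradient
    updates (in a real framework) would train the encoder jointly."""
    total = 0.0
    for task, patch in batches.items():
        head, W = heads[task]
        out = head(encoder(patch, W_enc), W)
        total += float(np.mean(out ** 2))  # placeholder per-task loss
    return total

batches = {t: rng.normal(size=(32, 32)) for t in heads}
loss = multi_task_step(batches)
```

In a real setting each step would sample patches from the different task datasets, and backpropagating the summed loss through the shared encoder is what lets the 16 tasks jointly shape a single representation.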