Medical Vision Foundation Models (Med-VFMs) show superior capability in interpreting medical images, owing to the knowledge learned through self-supervised pre-training on extensive unannotated images. To improve their performance on downstream tasks, especially segmentation, a few samples from target domains are typically selected at random for fine-tuning. However, little work has explored how to adapt Med-VFMs efficiently so that they achieve optimal performance on target domains. It is therefore highly desirable to design an efficient fine-tuning scheme for Med-VFMs that selects informative samples to maximize their adaptation performance on target domains. To this end, we propose an Active Source-Free Domain Adaptation (ASFDA) method that efficiently adapts Med-VFMs to target domains for volumetric medical image segmentation. ASFDA employs a novel Active Learning (AL) method that selects the most informative samples from target domains for fine-tuning Med-VFMs without access to the source pre-training samples, thereby maximizing performance under a minimal selection budget. Within this AL method, we design an Active Test-Time Sample Query strategy that selects samples from target domains via two query metrics: Diversified Knowledge Divergence (DKD) and Anatomical Segmentation Difficulty (ASD). DKD measures the source-target knowledge gap and intra-domain diversity, using the knowledge from pre-training to guide the querying of source-dissimilar and semantically diverse samples from target domains. ASD evaluates the difficulty of segmenting anatomical structures by adaptively measuring predictive entropy over foreground regions. In addition, ASFDA employs Selective Semi-supervised Fine-tuning, which improves the performance and efficiency of fine-tuning by identifying highly reliable samples among the unqueried ones.
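To make the DKD idea concrete, the following is a minimal sketch of a DKD-style query score, assuming a generic feature-embedding formulation (the function name, prototype representation, and additive combination are illustrative assumptions, not the paper's exact definition): a candidate is favored when it is far from the source knowledge (divergence) and far from already-queried samples (diversity).

```python
import numpy as np

def dkd_scores(target_feats, source_prototypes, selected_feats):
    """Sketch of a DKD-style query score (assumed form).

    target_feats:      (N, D) candidate target-sample embeddings
    source_prototypes: (K, D) embeddings summarizing source pre-training knowledge
    selected_feats:    (M, D) embeddings of already-queried samples (M may be 0)
    Returns an (N,) score; higher means more informative to query.
    """
    # knowledge divergence: distance to the nearest source prototype
    # (large distance = source-dissimilar sample)
    div = np.linalg.norm(
        target_feats[:, None, :] - source_prototypes[None, :, :], axis=-1
    ).min(axis=1)
    if len(selected_feats) == 0:
        return div
    # intra-domain diversity: distance to the nearest already-selected sample
    dvs = np.linalg.norm(
        target_feats[:, None, :] - selected_feats[None, :, :], axis=-1
    ).min(axis=1)
    return div + dvs
```

Querying then amounts to repeatedly taking the arg-max of these scores and moving that sample into `selected_feats`, in the spirit of k-center greedy selection.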
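The ASD metric can likewise be sketched as mean voxel-wise predictive entropy restricted to an adaptively determined foreground region. This is a hedged illustration under assumed conventions (class 0 as background, a probability threshold to pick the foreground mask), not the paper's exact formulation.

```python
import numpy as np

def anatomical_segmentation_difficulty(probs, fg_threshold=0.5):
    """Sketch of an entropy-based difficulty score (assumed form).

    probs: (C, H, W) softmax probabilities for C classes; class 0 = background.
    Returns mean predictive entropy over the predicted foreground region.
    """
    eps = 1e-8
    # voxel-wise predictive entropy: H(p) = -sum_c p_c * log(p_c)
    entropy = -np.sum(probs * np.log(probs + eps), axis=0)
    # adaptive foreground mask: voxels whose background probability is low
    fg_mask = probs[0] < fg_threshold
    if not fg_mask.any():
        # no confident foreground: fall back to whole-image entropy
        return float(entropy.mean())
    return float(entropy[fg_mask].mean())
```

Samples whose foreground entropy is high correspond to anatomical structures the model segments uncertainly, making them informative candidates to query.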