The success of large language models has inspired the computer vision community to explore image segmentation foundation models that can zero/few-shot generalize through prompt engineering. Segment Anything (SAM), among others, is the state-of-the-art image segmentation foundation model, demonstrating strong zero/few-shot generalization. Despite this success, recent studies reveal the weakness of SAM under strong distribution shift: in particular, SAM performs poorly on corrupted natural images, camouflaged images, medical images, etc. Motivated by these observations, we aim to develop a self-training based strategy to adapt SAM to the target distribution. Given the unique challenges of a large source dataset, high computation cost, and incorrect pseudo-labels, we propose a weakly supervised self-training architecture with anchor regularization and low-rank finetuning to improve the robustness and computation efficiency of adaptation. We validate its effectiveness on five types of downstream segmentation tasks, covering clean/corrupted natural images, medical images, camouflaged images, and robotic images. The proposed method is task-agnostic in nature and outperforms pre-trained SAM and state-of-the-art domain adaptation methods on almost all downstream tasks with the same testing prompt inputs.
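To make the two named ingredients concrete, below is a minimal PyTorch sketch of anchor-regularized self-training combined with low-rank (LoRA-style) finetuning. All names here (adapt_sam, LoRALinear, weak_prompts, the model-call signature, and the loss weighting) are illustrative assumptions for exposition, not the authors' released implementation.

```python
import copy
import torch
import torch.nn.functional as F


class LoRALinear(torch.nn.Module):
    """Low-rank adapter: y = Wx + B(Ax), with only A and B trainable."""

    def __init__(self, base: torch.nn.Linear, rank: int = 4):
        super().__init__()
        self.base = base
        for p in self.base.parameters():          # freeze the pre-trained weight
            p.requires_grad_(False)
        self.A = torch.nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = torch.nn.Parameter(torch.zeros(base.out_features, rank))

    def forward(self, x):
        return self.base(x) + x @ self.A.t() @ self.B.t()


def adapt_sam(student, unlabeled_loader, lora_params, steps=1000, lam=0.1, lr=1e-4):
    """Source-free self-training sketch: a frozen anchor copy of the
    pre-trained model generates pseudo-labels and regularizes the student;
    only the injected low-rank parameters are updated."""
    anchor = copy.deepcopy(student).eval()        # frozen pre-trained SAM
    for p in anchor.parameters():
        p.requires_grad_(False)
    opt = torch.optim.Adam(lora_params, lr=lr)    # update LoRA weights only

    for step, (images, weak_prompts) in enumerate(unlabeled_loader):
        if step >= steps:
            break
        with torch.no_grad():
            # Assumed interface: model(images, prompts) -> mask logits.
            anchor_logits = anchor(images, weak_prompts)
            pseudo_masks = (anchor_logits.sigmoid() > 0.5).float()

        student_logits = student(images, weak_prompts)
        # Self-training term: fit the student to the anchor's pseudo-labels.
        loss_st = F.binary_cross_entropy_with_logits(student_logits, pseudo_masks)
        # Anchor regularization: keep the student close to the frozen model
        # so noisy pseudo-labels cannot drag it arbitrarily far off-distribution.
        loss_reg = F.mse_loss(student_logits.sigmoid(), anchor_logits.sigmoid())
        loss = loss_st + lam * loss_reg

        opt.zero_grad()
        loss.backward()
        opt.step()
    return student
```

Training only the low-rank adapters keeps the adaptation memory- and compute-light, which matches the abstract's goal of avoiding full finetuning of a large foundation model.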