Segment Anything Models (SAMs), as vision foundation models, have demonstrated remarkable performance across various image analysis tasks. Despite their strong generalization capabilities, SAMs struggle with fine-grained detail segmentation in high-resolution class-independent segmentation (HRCS), owing to their inability to directly process high-resolution inputs, the low resolution of their mask predictions, and their reliance on accurate manual prompts. To address these limitations, we propose MGD-SAM2, which integrates SAM2 with multi-view feature interaction between a global image and local patches to achieve precise segmentation. MGD-SAM2 combines the pre-trained SAM2 with four novel modules: the Multi-view Perception Adapter (MPAdapter), the Multi-view Complementary Enhancement Module (MCEM), the Hierarchical Multi-view Interaction Module (HMIM), and the Detail Refinement Module (DRM). Specifically, we first introduce MPAdapter to adapt the SAM2 encoder for enhanced extraction of local details and global semantics in HRCS images. Then, MCEM and HMIM are proposed to further exploit local texture and global context by aggregating multi-view features within and across multiple scales. Finally, DRM is designed to generate progressively restored high-resolution mask predictions, compensating for the loss of fine-grained detail caused by directly upsampling low-resolution prediction maps. Experimental results demonstrate the superior performance and strong generalization of our model on multiple high-resolution and normal-resolution datasets. Code will be available at https://github.com/sevenshr/MGD-SAM2.
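The multi-view setup described above (a downsampled global view of the full image plus full-detail local patches, both fed to the same encoder) can be illustrated with a minimal sketch. The input resolution, 2x2 patch grid, and nearest-neighbour resizing here are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np

def split_into_patches(img, n=2):
    # Split an HxWxC image into an n x n grid of full-resolution local patches.
    H, W, _ = img.shape
    ph, pw = H // n, W // n
    return [img[i * ph:(i + 1) * ph, j * pw:(j + 1) * pw]
            for i in range(n) for j in range(n)]

def downsample(img, factor):
    # Nearest-neighbour subsampling as a stand-in for resizing the
    # high-resolution image to the encoder's input size.
    return img[::factor, ::factor]

# Hypothetical 1024x1024 high-resolution input. The shared encoder (not shown)
# would process the global view for semantics and each local patch for detail,
# with the MPAdapter/MCEM/HMIM modules fusing the two feature streams.
hr_image = np.random.rand(1024, 1024, 3)
global_view = downsample(hr_image, 2)          # one 512x512 global context view
local_patches = split_into_patches(hr_image)   # four 512x512 detail crops
```

Each view then matches the encoder's expected input size while the patches retain the original pixel density, which is what allows the later modules to recover fine-grained boundaries.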