Source-free domain adaptation (SFDA) tackles the critical challenge of adapting source-pretrained models to unlabeled target domains without access to source data, overcoming data-privacy and storage limitations in real-world applications. However, existing SFDA approaches struggle with the trade-off between receptive field and computational efficiency in domain-invariant feature learning. Recently, Mamba has offered a promising solution through its selective scan mechanism, which enables long-range dependency modeling with linear complexity. Yet the Visual Mamba (i.e., VMamba) remains limited in capturing the channel-wise frequency characteristics critical for domain alignment and in maintaining spatial robustness under significant domain shifts. To address these limitations, we propose SfMamba, a framework that fully exploits stable dependencies in source-free model transfer. SfMamba introduces a Channel-wise Visual State-Space block that enables channel-sequence scanning for domain-invariant feature extraction. In addition, SfMamba incorporates a Semantic-Consistent Shuffle strategy that disrupts background patch sequences in the 2D selective scan while preserving prediction consistency to mitigate error accumulation. Comprehensive evaluations across multiple benchmarks show that SfMamba consistently outperforms existing methods while maintaining favorable parameter efficiency, offering a practical solution for SFDA. Our code is available at https://github.com/chenxi52/SfMamba.
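The abstract does not specify how the Semantic-Consistent Shuffle is implemented; a minimal sketch of the general idea, permuting only background patch tokens in the scan sequence while leaving foreground patches fixed, might look as follows (the function name, tensor shapes, and the use of a precomputed foreground mask are assumptions for illustration, not the authors' implementation):

```python
import torch

def semantic_consistent_shuffle(tokens: torch.Tensor, fg_mask: torch.Tensor) -> torch.Tensor:
    """Randomly permute only the background patch tokens of each sequence.

    tokens:  (B, N, D) patch-token sequence fed to a 2D selective scan
    fg_mask: (B, N) boolean mask, True where a patch is foreground (semantic)
    Returns a new tensor in which foreground tokens keep their positions
    and background tokens are shuffled among the background positions.
    """
    shuffled = tokens.clone()
    for b in range(tokens.size(0)):
        # Indices of background patches for this sample
        bg_idx = (~fg_mask[b]).nonzero(as_tuple=True)[0]
        # Random permutation restricted to background positions
        perm = bg_idx[torch.randperm(bg_idx.numel())]
        shuffled[b, bg_idx] = tokens[b, perm]
    return shuffled
```

Under such a scheme, a consistency loss between predictions on the original and shuffled sequences would encourage the model to rely on foreground semantics rather than background layout.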