Recent successes suggest that parameter-efficient fine-tuning of foundation models has become the state-of-the-art method for transfer learning in vision, replacing a rich literature of alternatives such as meta-learning. To harness the best of both worlds, meta-tuning introduces a subsequent optimization stage for foundation models, but it has so far shown only limited success and, crucially, tends to underperform on out-of-distribution (OOD) tasks. In this paper, we introduce Sparse MetA-Tuning (SMAT), a method inspired by sparse mixture-of-experts approaches and trained to automatically isolate subsets of pre-trained parameters for meta-tuning on each task. SMAT successfully overcomes OOD sensitivity and delivers on the promise of enhancing the transfer abilities of vision foundation models beyond parameter-efficient fine-tuning. We establish new state-of-the-art results on a challenging combination of Meta-Dataset augmented with additional OOD tasks, in both zero-shot and gradient-based adaptation settings. In addition, we provide a thorough analysis of the superiority of learned over hand-designed sparsity patterns for sparse expert methods, and of the pivotal importance of the sparsity level in balancing in-distribution and out-of-distribution generalization. Our code is publicly available.
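To make the core mechanism concrete, the sketch below illustrates the general idea of a gated mixture of sparse experts, each restricted to a learned subset of pre-trained parameters. This is a minimal toy sketch, not the paper's implementation: the scores, gate, and update tensors are random placeholders standing in for meta-learned quantities, and all names (`W_pre`, `sparse_mask`, `gate`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretrained weight matrix (stand-in for one layer of a foundation model).
W_pre = rng.standard_normal((8, 8))

# Hypothetical per-parameter, per-expert scores; in a method like SMAT
# these would be meta-learned, here they are random placeholders.
n_experts = 4
scores = rng.standard_normal((n_experts, *W_pre.shape))

def sparse_mask(score, sparsity=0.75):
    """Keep only the top (1 - sparsity) fraction of parameters by score."""
    k = int(score.size * (1.0 - sparsity))
    thresh = np.sort(score, axis=None)[-k]
    return (score >= thresh).astype(W_pre.dtype)

# Task-specific gate: softmax weights over experts (placeholder values).
gate = np.exp(rng.standard_normal(n_experts))
gate /= gate.sum()

# Each expert proposes an update restricted to its sparse parameter subset;
# the gated mixture forms the task-adapted weights.
deltas = rng.standard_normal((n_experts, *W_pre.shape)) * 0.01
W_task = W_pre + sum(
    g * sparse_mask(scores[i]) * deltas[i] for i, g in enumerate(gate)
)

# At sparsity 0.75, only 25% of each expert's update entries are active.
print(sparse_mask(scores[0]).mean())
```

The key design choice this toys with is the one the abstract highlights: the sparsity level controls how much of the pre-trained model each expert may touch, trading off in-distribution specialization against out-of-distribution robustness.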