Most existing methods for unsupervised domain adaptation (UDA) rely on a shared network to extract domain-invariant features. However, when facing multiple source domains, optimizing such a network involves updating the parameters of the entire network, making it both computationally expensive and challenging, particularly when coupled with min-max objectives. Inspired by recent advances in prompt learning, which adapts high-capacity models to downstream tasks in a computationally economical way, we introduce Multi-Prompt Alignment (MPA), a simple yet efficient framework for multi-source UDA. Given a source and target domain pair, MPA first trains an individual prompt to minimize the domain gap through a contrastive loss. Then, MPA denoises the learned prompts through an auto-encoding process and aligns them by maximizing the agreement among all the reconstructed prompts. Moreover, we show that the resulting subspace acquired from the auto-encoding process can easily generalize to a streamlined set of target domains, making our method more efficient for practical use. Extensive experiments show that MPA achieves state-of-the-art results on three popular datasets, with an impressive average accuracy of 54.1% on DomainNet.
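The denoise-and-align step described above can be sketched at a high level. This is only an illustrative toy, not the paper's implementation: the prompt vectors are random stand-ins for the contrastively trained prompts, a linear PCA-style projection stands in for the learned auto-encoder, and all names and dimensions (`num_prompts`, `prompt_dim`, `latent_dim`) are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-ins: one learned prompt vector per source-target pair.
# In MPA these would come from contrastive prompt training; here they are random.
num_prompts, prompt_dim, latent_dim = 6, 32, 3
prompts = rng.normal(size=(num_prompts, prompt_dim))

# "Denoise" the prompts with a linear auto-encoding step: project onto the
# top principal directions (a shared low-dimensional subspace) and back.
mean = prompts.mean(axis=0)
centered = prompts - mean
_, _, vt = np.linalg.svd(centered, full_matrices=False)
subspace = vt[:latent_dim]                      # basis of the prompt subspace
reconstructed = centered @ subspace.T @ subspace + mean

# Alignment objective: maximize agreement among the reconstructed prompts,
# measured here as mean pairwise cosine similarity (a score, not an optimizer).
unit = reconstructed / np.linalg.norm(reconstructed, axis=1, keepdims=True)
agreement = float((unit @ unit.T).mean())
print(reconstructed.shape, round(agreement, 4))
```

In the actual method the auto-encoder and the agreement objective would be trained jointly; this sketch only shows how a shared subspace both denoises the per-pair prompts and provides the quantity being aligned.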