Selecting or designing an appropriate domain adaptation algorithm for a given problem remains challenging. This paper presents a Transformer model that provably approximates and selects domain adaptation methods for a given dataset within the in-context learning framework, in which a foundation model performs new tasks without updating its parameters at test time. Specifically, we prove that Transformers can approximate instance-based and feature-based unsupervised domain adaptation algorithms and automatically select the algorithm suited to a given dataset. Numerical results indicate that in-context learning achieves adaptive domain adaptation that surpasses existing methods.
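To make the setting concrete, below is a minimal illustrative sketch (not the paper's implementation) of the two ingredients the abstract refers to: an instance-based unsupervised domain adaptation baseline (importance weighting of source samples) and the in-context formulation, where labelled source data and unlabelled target data are packed into a single prompt that a frozen model would consume at test time. All data, the Gaussian density-ratio estimate, and the `[x, y]`-token prompt layout are toy assumptions chosen for illustration.

```python
# Toy sketch of unsupervised domain adaptation, assuming synthetic 2-D data
# with a covariate shift between source and target.
import numpy as np

rng = np.random.default_rng(0)

n_src, n_tgt, d = 64, 32, 2
w_true = np.array([1.0, -1.0])
X_src = rng.normal(0.0, 1.0, size=(n_src, d))        # labelled source inputs
y_src = (X_src @ w_true > 0).astype(float)           # source labels
X_tgt = rng.normal(0.5, 1.0, size=(n_tgt, d))        # shifted, unlabelled target inputs

# Instance-based baseline: reweight source samples by an estimate of
# p_target(x) / p_source(x); here we use the known toy Gaussian densities.
def gaussian_logpdf(X, mean, std):
    return -0.5 * np.sum(((X - mean) / std) ** 2 + np.log(2 * np.pi * std**2), axis=1)

log_w = gaussian_logpdf(X_src, 0.5, 1.0) - gaussian_logpdf(X_src, 0.0, 1.0)
weights = np.exp(log_w)

# Importance-weighted least-squares classifier fit on source data
# (a stand-in for the instance-based learners discussed in the abstract).
W = np.diag(weights)
beta = np.linalg.solve(X_src.T @ W @ X_src, X_src.T @ W @ y_src)
pred_tgt = (X_tgt @ beta > 0.5).astype(float)

# In-context formulation: the same information is arranged as one sequence of
# [x, y] tokens, with labels masked for target points. A frozen Transformer
# would read this prompt and emit target predictions without any gradient step.
mask = -1.0  # placeholder label for unlabelled target tokens
context = np.vstack([
    np.hstack([X_src, y_src[:, None]]),               # labelled source tokens
    np.hstack([X_tgt, np.full((n_tgt, 1), mask)]),    # unlabelled target tokens
])
print(context.shape, pred_tgt.mean())  # prompt shape and fraction of positive target predictions
```

In this picture, the paper's claim is that a single Transformer reading such a prompt can internally emulate reweighting-style (instance-based) or representation-alignment (feature-based) procedures and pick whichever fits the data, rather than a practitioner choosing the algorithm by hand.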