Unsupervised domain adaptation tackles the problem that domain shifts between training and test data impair the performance of neural networks in many real-world applications. Moreover, in realistic scenarios, the source data may no longer be available during adaptation, and the label space of the target domain may differ from the source label space. This setting, known as source-free universal domain adaptation (SF-UniDA), has recently gained attention, but existing approaches assume only a single domain shift from source to target. In this work, we present the first study on continual SF-UniDA, where the model must adapt sequentially to a stream of multiple different unlabeled target domains. Building upon our previous methods for online SF-UniDA, we combine their key ideas by integrating Gaussian mixture model-based pseudo-labeling within a mean teacher framework for improved stability over long adaptation sequences. Additionally, we introduce consistency losses for further robustness. The resulting method, GMM-COMET, provides a strong first baseline for continual SF-UniDA and is the only approach in our experiments to consistently improve upon the source-only model across all evaluated scenarios. Our code is available at https://github.com/pascalschlachter/GMM-COMET.
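The abstract names two building blocks: Gaussian mixture model-based pseudo-labeling of unlabeled target features, and a mean teacher whose weights are an exponential moving average (EMA) of the student's. The sketch below illustrates both ideas in a minimal, generic form; the function names, the momentum value, and the rejection threshold are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def ema_update(teacher_params, student_params, momentum=0.999):
    """Mean-teacher step (sketch): the teacher's weights are an
    exponential moving average of the student's weights, which
    stabilizes the targets over long adaptation sequences."""
    return [momentum * t + (1.0 - momentum) * s
            for t, s in zip(teacher_params, student_params)]

def gmm_pseudo_labels(features, n_components, reject_threshold=0.9):
    """GMM-based pseudo-labeling (sketch): fit a Gaussian mixture to
    target features and take the most probable component as the
    pseudo-label. Samples whose maximum posterior falls below the
    threshold are rejected (labeled -1), e.g. as candidates for
    unknown classes in the universal DA setting."""
    gmm = GaussianMixture(n_components=n_components, random_state=0)
    gmm.fit(features)
    posteriors = gmm.predict_proba(features)
    labels = posteriors.argmax(axis=1)
    labels[posteriors.max(axis=1) < reject_threshold] = -1
    return labels
```

In a full method, the GMM would typically be fit on teacher features, the resulting pseudo-labels would supervise the student, and consistency losses between student and teacher predictions would add further robustness.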