In real-world applications, there is often a domain shift from training to test data. This observation motivated the development of test-time adaptation (TTA), which aims to adapt a pre-trained source model to the test data without requiring access to the source data. However, most existing works are limited to the closed-set assumption, i.e., that there is no category shift between the source and target domains. We argue that in a realistic open-world setting, a category shift can appear in addition to a domain shift. This means that individual source classes may no longer appear in the target domain, that samples of new classes may be part of the target domain, or even both at the same time. Moreover, in many real-world scenarios the test data is not accessible all at once but arrives sequentially as a stream of batches demanding an immediate prediction. Hence, TTA must be applied in an online manner. To the best of our knowledge, the combination of these aspects, i.e., online source-free universal domain adaptation (online SF-UniDA), has not been studied yet. In this paper, we introduce a Contrastive Mean Teacher (COMET) tailored to this novel scenario. It applies a contrastive loss to rebuild a feature space in which the samples of known classes form distinct clusters and the samples of new classes separate well from them. This is complemented by an entropy loss which ensures that the classifier output has a small entropy for samples of known classes and a large entropy for samples of new classes, so that the latter can easily be detected and rejected as unknown. To provide the losses with reliable pseudo-labels, both are embedded into a mean teacher (MT) framework. We evaluate our method across two datasets and all category shifts to set an initial benchmark for online SF-UniDA. COMET yields state-of-the-art performance and proves to be consistent and robust across a variety of different scenarios.
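The entropy-based rejection described above can be illustrated with a minimal sketch: a sample is flagged as unknown when the entropy of its softmax output exceeds a threshold. The normalization by log of the class count and the threshold value of 0.5 are illustrative assumptions here, not COMET's actual implementation.

```python
import numpy as np

def softmax(logits):
    # numerically stable softmax over the last axis
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def normalized_entropy(probs):
    # Shannon entropy, normalized to [0, 1] by log(num_classes)
    eps = 1e-12
    h = -(probs * np.log(probs + eps)).sum(axis=-1)
    return h / np.log(probs.shape[-1])

def reject_unknown(logits, threshold=0.5):
    """Flag samples whose normalized prediction entropy exceeds
    the (hypothetical) threshold as unknown."""
    return normalized_entropy(softmax(logits)) > threshold

# A confident (low-entropy) prediction vs. a near-uniform (high-entropy) one.
logits = np.array([
    [8.0, 0.1, 0.2],   # confident -> treated as a known class
    [0.5, 0.4, 0.6],   # near-uniform -> rejected as unknown
])
print(reject_unknown(logits))  # [False  True]
```

During online adaptation, such a criterion lets each incoming batch be split immediately into known-class samples (kept for pseudo-labeling) and unknown-class samples (rejected), without ever revisiting the stream.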