Vision-language foundation models (e.g., CLIP) have shown remarkable performance across a wide range of tasks. However, deploying these models can be unreliable when significant distribution gaps exist between the training and test data. The training-free test-time dynamic adapter (TDA) is a promising approach to address this issue: it stores representative test samples to guide the classification of subsequent ones. However, TDA only naively maintains a limited number of reference samples in the cache, leading to severe test-time catastrophic forgetting when the cache is updated by dropping samples. In this paper, we propose a simple yet effective method for DistributiOnal Test-time Adaptation (Dota). Instead of naively memorizing representative test samples, Dota continually estimates the distributions of test samples, allowing the model to continually adapt to the deployment environment. The test-time posterior probabilities are then computed from the estimated distributions via Bayes' theorem for adaptation. To further enhance adaptability to uncertain samples, we introduce a new human-in-the-loop paradigm that identifies uncertain samples, collects human feedback, and incorporates it into the Dota framework. Extensive experiments validate that Dota enables CLIP to continually learn, resulting in a significant improvement over current state-of-the-art methods.
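The core idea of estimating test-sample distributions and computing Bayes-theorem posteriors can be illustrated with a minimal sketch. This is a hypothetical illustration, not the authors' exact implementation: it assumes per-class Gaussian distributions with a shared, fixed covariance, incrementally updated class means (e.g., from pseudo-labels or human feedback), and a `DistributionalAdapter` class whose name and interface are invented here for clarity.

```python
import numpy as np

class DistributionalAdapter:
    """Hypothetical sketch: per-class Gaussian estimates of test features,
    updated online, with posteriors computed via Bayes' theorem."""

    def __init__(self, num_classes, dim, eps=1e-3):
        self.counts = np.zeros(num_classes)          # samples seen per class
        self.means = np.zeros((num_classes, dim))    # running class means
        self.cov = np.eye(dim)                       # shared covariance (assumed fixed)
        self.eps = eps                               # regularizer for inversion

    def update(self, feature, label):
        # Incremental mean update for the observed (pseudo-)labeled class,
        # so the distribution estimate adapts continually at test time.
        self.counts[label] += 1
        self.means[label] += (feature - self.means[label]) / self.counts[label]

    def posterior(self, feature):
        # Bayes' theorem: p(y|x) ∝ p(x|y) p(y), with Gaussian likelihoods.
        d = self.cov.shape[0]
        prec = np.linalg.inv(self.cov + self.eps * np.eye(d))
        diffs = feature - self.means                           # (C, D)
        log_lik = -0.5 * np.einsum('cd,de,ce->c', diffs, prec, diffs)
        priors = (self.counts + 1) / (self.counts.sum() + len(self.counts))
        logits = log_lik + np.log(priors)
        p = np.exp(logits - logits.max())                      # stable softmax
        return p / p.sum()
```

In practice such class-conditional statistics would be accumulated over CLIP image features as the test stream arrives, so classification sharpens without dropping any past information the way a fixed-size cache must.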