This paper addresses the critical issue of miscalibration in CLIP-based model adaptation, particularly in the challenging scenario of out-of-distribution (OOD) samples, which has been overlooked in the existing literature on CLIP adaptation. We empirically demonstrate that popular CLIP adaptation approaches, such as Adapters, Prompt Learning, and Test-Time Adaptation, substantially degrade the calibration capabilities of the zero-shot baseline in the presence of distributional drift. We identify the increase in logit ranges as the underlying cause of miscalibration in CLIP adaptation methods, in contrast with previous work on calibrating fully-supervised models. Motivated by these observations, we present a simple and model-agnostic solution that mitigates miscalibration by scaling the logit range of each sample to match that of its zero-shot prediction logits. We explore three alternatives to achieve this, which can be either integrated during adaptation or applied directly at inference time. Comprehensive experiments on popular OOD classification benchmarks demonstrate the effectiveness of the proposed approaches in mitigating miscalibration while maintaining discriminative performance, with improvements that are consistent across all three families of these increasingly popular approaches. The code is publicly available at: https://github.com/Bala93/CLIPCalib
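The core idea described above can be sketched as follows. This is a minimal, hypothetical NumPy illustration of per-sample logit-range scaling, assuming the range is measured as max minus min over the class logits; the function name and exact formulation are our own, not necessarily the paper's implementation.

```python
import numpy as np

def rescale_to_zeroshot_range(adapted_logits, zeroshot_logits):
    """Scale a sample's adapted logits so that their range (max - min)
    matches the range of the zero-shot logits for the same sample.

    Multiplicative scaling preserves the predicted class (argmax),
    so discriminative performance is unchanged while the softmax
    confidence is tempered toward the zero-shot baseline's.
    """
    adapted_range = adapted_logits.max() - adapted_logits.min()
    zeroshot_range = zeroshot_logits.max() - zeroshot_logits.min()
    # Guard against a degenerate all-equal logit vector.
    if adapted_range == 0:
        return adapted_logits
    return adapted_logits * (zeroshot_range / adapted_range)
```

Because the transformation is a single positive scalar per sample, it can be applied post hoc at inference time without retraining, which is consistent with the model-agnostic framing of the abstract.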