Recent advances in fine-tuning Vision-Language Models (VLMs) have witnessed the success of prompt tuning and adapter tuning, while classic model fine-tuning of the inherent parameters seems to be overlooked. It is widely believed that fine-tuning the parameters of VLMs with few-shot samples corrupts the pre-trained knowledge, since even fine-tuning the full CLIP model degrades performance. In this paper, we revisit this viewpoint and propose a new perspective: fine-tuning specific parameters instead of all of them will uncover the power of classic model fine-tuning on VLMs. Through a meticulous study, we propose CLIPFit, a simple yet effective method to fine-tune CLIP without introducing any overhead of extra parameters. We demonstrate that by fine-tuning only specific bias terms and normalization layers, CLIPFit improves the average harmonic-mean accuracy of zero-shot CLIP by 7.27\%. Lastly, to understand how the fine-tuning in CLIPFit affects the pre-trained model, we conduct extensive experimental analyses of the changes in internal parameters and representations. We find that the low-level text bias layers and the first layer-normalization layer change far more than the other layers. The code is available at \url{https://github.com/minglllli/CLIPFit}.
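The selective fine-tuning idea can be sketched as a name-based parameter filter that marks only bias terms and layer-normalization parameters as trainable. This is a minimal illustration assuming PyTorch-style parameter names (`text_encoder.*.bias`, `*.ln_*`); the exact selection rule and module names used by CLIPFit may differ.

```python
def select_trainable(param_names):
    """Return the subset of parameter names to fine-tune: bias terms in the
    text encoder and layer-normalization parameters. Everything else stays
    frozen. Illustrative rule only, not the authors' exact configuration."""
    selected = []
    for name in param_names:
        is_text_bias = name.startswith("text_encoder") and name.endswith(".bias")
        is_layernorm = ".ln_" in name or "layer_norm" in name
        if is_text_bias or is_layernorm:
            selected.append(name)
    return selected


# Hypothetical CLIP-like parameter names for demonstration.
names = [
    "visual.conv1.weight",
    "visual.ln_post.weight",
    "text_encoder.mlp.fc1.weight",
    "text_encoder.mlp.fc1.bias",
    "text_encoder.ln_1.bias",
]

for name in select_trainable(names):
    print(name)
```

In a real PyTorch setup the same filter would set `param.requires_grad = False` for every parameter whose name is not selected, so the optimizer only updates the chosen biases and normalization layers and no extra parameters are introduced.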