In recent years, large-scale pre-trained vision-language models such as CLIP have achieved tremendous success across various fields. Naturally, how to transfer the rich knowledge in such huge pre-trained models to downstream tasks and datasets has become a hot topic. During downstream adaptation, the most challenging problems are overfitting and catastrophic forgetting, which can cause the model to overly focus on the current data and lose more crucial domain-general knowledge. Existing works apply classic regularization techniques to address these problems, but as the solutions grow increasingly complex, the ever-growing storage and inference costs become a significant problem that urgently needs to be addressed. In this paper, we start from the observation that proper random noise can suppress overfitting and catastrophic forgetting. We then regard quantization error as a kind of noise and explore quantization for regularizing vision-language models, which is both efficient and effective. Furthermore, to improve the model's generalization capability while maintaining its specialization capacity at minimal cost, we deeply analyze the characteristics of the weight distribution in prompts, derive several principles for quantization module design, and follow these principles to create several competitive baselines. The proposed method is highly efficient due to its inherently lightweight nature, making adaptation possible even on extremely resource-limited devices. It can also be seamlessly integrated into many existing approaches such as MaPLe, enhancing accuracy while reducing storage overhead, making them more powerful yet versatile. Extensive experiments on 11 datasets demonstrate the clear superiority of our method. Code is available at https://github.com/beyondhtx/QPrompt.
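The core intuition, that quantization error behaves like bounded injected noise on prompt weights, can be illustrated with a minimal NumPy sketch. Note that `quantize_uniform` and the 4-bit setting below are illustrative assumptions, not the exact quantization module proposed in the paper:

```python
import numpy as np

def quantize_uniform(w, num_bits=4):
    """Uniform symmetric quantization; returns dequantized weights and step size."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = np.max(np.abs(w)) / qmax          # quantization step size
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q * scale, scale

rng = np.random.default_rng(0)
prompt = rng.normal(scale=0.02, size=512)     # stand-in for learned prompt weights

deq, scale = quantize_uniform(prompt)
error = deq - prompt                          # round-off error: acts like injected
                                              # noise bounded by half a step (scale/2)
```

Because each weight is perturbed by at most half a quantization step, the round-off plays the role of the "proper random noise" described above while simultaneously shrinking storage (4 bits per weight instead of 32).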