Visual prompt tuning (VPT) is a promising approach that incorporates learnable prompt tokens to adapt pre-trained models to downstream tasks. However, VPT and its variants often face challenges such as prompt initialization, prompt length, and subpar performance under self-supervised pretraining, which hinder effective contextual adaptation. This study begins by exploring how the correlation between prompts and patch tokens evolves during successful training. Motivated by the observation that prompt tokens tend to share high mutual information with patch tokens, we propose initializing prompts with downstream token prototypes. This strategic initialization, a drop-in replacement for the previous scheme, substantially improves fine-tuning performance. To refine the approach further, we optimize token construction with a streamlined pipeline that maintains excellent performance at almost no additional computational cost compared to VPT. Extensive experiments show that our proposed Self-Prompt Tuning (SPT) outperforms existing methods by a remarkable margin. For instance, it surpasses full fine-tuning on 19 of 24 tasks while using less than 0.4% of the learnable parameters on the FGVC and VTAB-1K benchmarks. Notably, SPT significantly advances adaptation under self-supervised pretraining, achieving impressive task performance gains of at least 10% and up to 30%. Moreover, the experimental results demonstrate that SPT is robust to prompt length and scales well with model capacity and training data size. We conclude with an insightful analysis of how much target data is needed to adapt pre-trained models to downstream tasks. The code is available at https://github.com/WangYZ1608/Self-Prompt-Tuning.
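To make the core idea concrete, the prototype-based initialization described above can be sketched as follows. This is a minimal illustration, not the paper's exact pipeline: it assumes patch-token embeddings have already been extracted from the frozen backbone for the downstream data, and the function name `prototype_prompt_init` and its grouping-by-label scheme are hypothetical simplifications.

```python
import numpy as np

def prototype_prompt_init(patch_tokens: np.ndarray,
                          labels: np.ndarray,
                          num_prompts: int) -> np.ndarray:
    """Initialize prompt tokens from downstream token prototypes.

    patch_tokens: (N, D) patch-token embeddings from the frozen backbone.
    labels:       (N,) group assignment (e.g. class or cluster) per embedding.
    Returns:      (num_prompts, D) prompts, each the mean embedding of one group.
    """
    n, d = patch_tokens.shape
    prompts = np.zeros((num_prompts, d), dtype=patch_tokens.dtype)
    for k in range(num_prompts):
        mask = labels == k
        if mask.any():
            # Prototype = mean of the group's patch-token embeddings.
            prompts[k] = patch_tokens[mask].mean(axis=0)
    return prompts

# Toy usage: 6 embeddings of dimension 4, split into 2 groups.
tokens = np.arange(24, dtype=np.float64).reshape(6, 4)
labels = np.array([0, 0, 0, 1, 1, 1])
prompts = prototype_prompt_init(tokens, labels, num_prompts=2)
print(prompts.shape)  # (2, 4)
```

The resulting `prompts` array would replace the random initialization of VPT's learnable prompt tokens before fine-tuning begins; all other training machinery stays unchanged.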