Recently, adapting pre-trained models to downstream tasks has attracted increasing interest. Previous Parameter-Efficient Tuning (PET) methods treat the pre-trained model as an opaque Black Box, relying purely on data-driven optimization and underutilizing its inherent prior knowledge. This oversight limits the model's potential for effective downstream task adaptation. To address these issues, we propose a novel black-whIte bOx prompT leArning framework (IOTA), which integrates a data-driven Black Box module with a knowledge-driven White Box module for downstream task adaptation. Specifically, the White Box module derives corrective knowledge by contrasting wrong predictions with the correct cognition. This knowledge is verbalized into interpretable, human-readable prompts and exploited through a corrective-knowledge-guided prompt selection strategy that steers the Black Box module toward more accurate predictions. By jointly leveraging knowledge- and data-driven learning signals, IOTA achieves effective downstream task adaptation. Experimental results on 12 image classification benchmarks under few-shot and easy-to-hard adaptation settings demonstrate the effectiveness of corrective knowledge and the superiority of our method over state-of-the-art approaches.