Continuous prompt search offers a computationally efficient alternative to conventional parameter tuning in natural language processing tasks. However, its practical effectiveness can be significantly hindered by the black-box nature and inherent high dimensionality of the objective landscape. Existing methods typically mitigate these challenges by restricting the search to a randomly projected low-dimensional subspace, yet the effectiveness and underlying motivation of this projection mechanism remain unclear. In this paper, we first show empirically that although the prompt space possesses a low-dimensional structure, random projections fail to adequately capture it. Motivated by this finding, we propose a projection-free prompt search method based on evolution strategies. By optimizing directly in the full prompt space with an adaptation mechanism calibrated to the intrinsic dimension, our method achieves competitive search capability without additional computational overhead. Furthermore, to bridge the generalization gap in few-shot scenarios, we introduce a confidence-based regularization mechanism that systematically strengthens the model's confidence in the target verbalizers. Experimental results on seven natural language understanding tasks from the GLUE benchmark demonstrate that our approach significantly outperforms existing baselines.
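The projection-free evolution-strategy search described above can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: `task_loss` is a hypothetical stand-in for the black-box few-shot objective (which would in practice query the frozen language model, optionally including the confidence-based regularization term), and all hyperparameters are illustrative.

```python
import numpy as np

# Hypothetical black-box objective: a stand-in for the few-shot task loss
# (plus any confidence-based regularizer) evaluated by querying the frozen
# model with the continuous prompt prepended to the input.
def task_loss(prompt):
    return np.sum((prompt - 0.5) ** 2)  # placeholder quadratic surrogate

def es_prompt_search(dim=64, pop=20, sigma=0.1, lr=0.05, steps=200, seed=0):
    """Projection-free evolution-strategy search over the full prompt space.

    Estimates a gradient of the smoothed objective from antithetic Gaussian
    perturbations and steps directly in the full `dim`-dimensional prompt
    space, with no random projection to a lower-dimensional subspace.
    """
    rng = np.random.default_rng(seed)
    prompt = np.zeros(dim)
    for _ in range(steps):
        eps = rng.standard_normal((pop, dim))
        losses_pos = np.array([task_loss(prompt + sigma * e) for e in eps])
        losses_neg = np.array([task_loss(prompt - sigma * e) for e in eps])
        # Antithetic ES gradient estimate: average of perturbation directions
        # weighted by the finite-difference of the two loss evaluations.
        grad = (losses_pos - losses_neg) @ eps / (2 * sigma * pop)
        prompt -= lr * grad
    return prompt
```

In the paper's setting, `sigma` and the population size would be the quantities calibrated to the intrinsic dimension of the prompt space; here they are fixed constants for brevity.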