Gradient-free prompt optimization methods have made significant strides in enhancing the performance of closed-source Large Language Models (LLMs) across a wide range of tasks. However, existing approaches overlook the importance of high-quality prompt initialization and the identification of effective optimization directions, and therefore require a substantial number of optimization steps to reach satisfactory performance. In light of this, we aim to accelerate the prompt optimization process to tackle the challenge of slow convergence. We propose a dual-phase approach: it first generates high-quality initial prompts by adopting a well-designed meta-instruction to delve into task-specific information, and then iteratively optimizes the prompts at the sentence level, leveraging previous tuning experience to expand prompt candidates and accept effective ones. Extensive experiments on eight datasets demonstrate the effectiveness of the proposed method, which achieves a consistent accuracy gain over baselines within fewer than five optimization steps.
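To make the dual-phase idea concrete, the sketch below outlines one possible realization in Python. It is a minimal illustration, not the paper's implementation: the `generate` and `score` callables, the meta-instruction wording, and the experience bookkeeping are assumptions the reader would replace with an actual LLM API and a development-set evaluator.

```python
# Minimal sketch of a dual-phase prompt optimization loop (illustrative only).
# `generate` and `score` are hypothetical callables supplied by the user,
# e.g. an LLM completion API and a dev-set accuracy function.
from typing import Callable, List


def optimize_prompt(
    task_description: str,
    generate: Callable[[str], List[str]],  # returns candidate prompt texts
    score: Callable[[str], float],         # returns dev-set accuracy of a prompt
    steps: int = 5,
) -> str:
    # Phase 1: build high-quality initial prompts with a meta-instruction
    # that probes task-specific information.
    meta_instruction = (
        "Analyze the following task and write an instruction prompt that "
        f"covers its key requirements:\n{task_description}"
    )
    candidates = generate(meta_instruction)
    best = max(candidates, key=score)

    # Phase 2: sentence-level iterative refinement guided by previous tuning experience.
    experience: List[str] = []  # short notes on which edits helped or hurt (simplified)
    for _ in range(steps):
        edit_request = (
            "Improve one sentence of the prompt below. "
            f"Past experience: {experience}\nPrompt:\n{best}"
        )
        new_candidates = generate(edit_request)   # expand prompt candidates
        challenger = max(new_candidates, key=score)
        if score(challenger) > score(best):        # accept only effective edits
            experience.append("accepted edit")
            best = challenger
        else:
            experience.append("rejected edit")
    return best
```

The key design choice reflected here is that both phases reuse the same LLM interface: the meta-instruction bootstraps strong starting prompts so that the second phase needs only a few accept/reject iterations.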