Prompt engineering is pivotal for harnessing the capabilities of large language models (LLMs) across diverse applications. While existing prompt optimization methods improve prompt effectiveness, they often lead to prompt drifting, where newly generated prompts can adversely impact previously successful cases while addressing failures. Furthermore, these methods tend to rely heavily on LLMs' intrinsic capabilities for prompt optimization tasks. In this paper, we introduce StraGo (Strategic-Guided Optimization), a novel approach designed to mitigate prompt drifting by leveraging insights from both successful and failed cases to identify critical factors for achieving optimization objectives. StraGo employs a how-to-do methodology, integrating in-context learning to formulate specific, actionable strategies that provide detailed, step-by-step guidance for prompt optimization. Extensive experiments conducted across a range of tasks, including reasoning, natural language understanding, domain-specific knowledge, and industrial applications, demonstrate StraGo's superior performance. It establishes a new state-of-the-art in prompt optimization, showcasing its ability to deliver stable and effective prompt improvements.
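To make the loop described above concrete, the sketch below shows one plausible reading of a StraGo-style iteration: score a prompt on a case set, have an LLM distill strategies from both successes and failures, apply them to produce a candidate prompt, and accept the candidate only if it fixes failures without regressing previously solved cases (the guard against prompt drifting). This is a minimal illustration under assumed interfaces, not the paper's actual algorithm; `call_llm`, `score`, and the acceptance rule are hypothetical stand-ins.

```python
# Illustrative sketch of a StraGo-style optimization loop.
# All helper names (call_llm, score) and the acceptance rule are
# hypothetical assumptions, not the paper's implementation.
from typing import Callable, List, Tuple

def optimize_prompt(
    prompt: str,
    cases: List[Tuple[str, str]],        # (input, expected output) pairs
    call_llm: Callable[[str], str],      # any completion endpoint
    rounds: int = 3,
) -> str:
    """Refine `prompt` iteratively, keeping a candidate only if it
    reduces failures without losing previously successful cases."""
    def score(p: str) -> Tuple[list, list]:
        successes, failures = [], []
        for x, y in cases:
            pred = call_llm(f"{p}\n\nInput: {x}\nOutput:").strip()
            (successes if pred == y else failures).append((x, y, pred))
        return successes, failures

    best_succ, best_fail = score(prompt)
    for _ in range(rounds):
        if not best_fail:
            break
        # Distill concrete, step-by-step strategies from BOTH successful
        # and failed cases (the "how-to-do" guidance via in-context learning).
        strategies = call_llm(
            "Given these successful cases:\n" + repr(best_succ[:3]) +
            "\nand these failed cases:\n" + repr(best_fail[:3]) +
            "\nlist concrete strategies that would fix the failures while "
            "preserving what makes the successes work."
        )
        candidate = call_llm(
            f"Rewrite the prompt by following the strategies.\n"
            f"Prompt:\n{prompt}\nStrategies:\n{strategies}\nNew prompt:"
        ).strip()
        cand_succ, cand_fail = score(candidate)
        # Drift guard: accept only if no previously solved case is lost
        # and the overall failure count goes down.
        if len(cand_succ) >= len(best_succ) and len(cand_fail) < len(best_fail):
            prompt, best_succ, best_fail = candidate, cand_succ, cand_fail
    return prompt
```

The acceptance check is the key design choice in this sketch: by requiring that the success count never drops, it operationalizes the stability claim, at the cost of possibly rejecting candidates that trade one solved case for several fixed failures.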