We introduce Meta Prompting (MP), a framework that elevates the reasoning capabilities of large language models (LLMs) by focusing on the formal structure of a task rather than content-specific examples. We establish a theoretical foundation for this paradigm, formalizing MP as a functor that maps a category of tasks to a category of structured prompts, thereby guaranteeing that compositional problem-solving strategies can be systematically decomposed into modular prompt structures. We extend this concept to Recursive Meta Prompting (RMP), an automated process in which an LLM generates and refines its own prompts. We model this self-improvement loop formally as a monad, providing a principled framework for automated prompt engineering. Our claims are validated through extensive experiments demonstrating that a Qwen-72B base model, guided by a single, example-agnostic meta-prompt, achieves state-of-the-art results on MATH, GSM8K, and Game of 24, with substantial token-efficiency gains over traditional few-shot methods. Project Page: https://github.com/meta-prompting/meta-prompting.
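As a rough illustration of the formal claims above (a sketch in our own notation; the symbols $\mathcal{T}$, $\mathcal{P}$, $\mathcal{M}$, $R$, $\eta$, and $\mu$ are assumed here rather than taken from the paper), the functorial view can be written as

\[
\mathcal{M} : \mathcal{T} \to \mathcal{P}, \qquad
\mathcal{M}(g \circ f) = \mathcal{M}(g) \circ \mathcal{M}(f), \qquad
\mathcal{M}(\mathrm{id}_T) = \mathrm{id}_{\mathcal{M}(T)},
\]

so that composing sub-tasks in the task category $\mathcal{T}$ corresponds to composing modular prompt structures in the prompt category $\mathcal{P}$. Recursive Meta Prompting can then be read as a monad $(R, \eta, \mu)$ on $\mathcal{P}$, with unit $\eta_P : P \to R(P)$ lifting a prompt into a self-refinement context and multiplication $\mu_P : R(R(P)) \to R(P)$ collapsing nested rounds of prompt generation into a single refined prompt.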