Prompt engineering is crucial for fully leveraging large language models (LLMs), yet most existing optimization methods follow a single trajectory, resulting in limited adaptability, gradient conflicts, and high computational overhead. We propose MAPGD (Multi-Agent Prompt Gradient Descent), a novel framework that reconceptualizes prompt optimization as a collaborative process among specialized agents. Each agent focuses on a distinct refinement dimension, such as instruction clarity, example selection, format structure, or stylistic adaptation, and their contributions are coordinated through semantic gradient embedding, conflict detection, and fusion. To further enhance robustness and stability, MAPGD introduces two new mechanisms: Hypersphere Constrained Gradient Clustering (HCGC), which enforces angular margin constraints for compact and well-separated clusters, and Channel Adaptive Agent Weighting (CAAW), which dynamically reweights agent contributions based on validation performance. Experiments on classification and reasoning benchmarks show that MAPGD consistently surpasses single-agent and random baselines in both accuracy and efficiency. Ablation studies confirm the effectiveness of gradient fusion, agent specialization, and conflict resolution. Together, these components establish MAPGD as a unified, gradient-based, and interpretable framework for robust prompt optimization with theoretical convergence guarantees.
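The coordination step described above — agents proposing semantic gradients, conflict detection, and validation-weighted fusion — can be illustrated with a minimal sketch. All names, thresholds, and data structures here are assumptions for illustration, not the paper's actual implementation; conflicts are flagged when two proposal embeddings point in opposing directions (negative cosine similarity), and survivors are fused with CAAW-style validation weights.

```python
import math

def cosine(u, v):
    # cosine similarity between two embedding vectors
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def fuse_gradients(proposals, conflict_threshold=-0.5):
    """Hypothetical fusion step (names assumed, not from the paper).

    proposals: list of dicts with keys 'agent', 'embedding', 'val_score'.
    Pairs of proposals whose embeddings conflict (cosine below the
    threshold) are resolved by dropping the lower-scoring one; the
    survivors are averaged with validation-score weights.
    """
    dropped = set()
    for i in range(len(proposals)):
        for j in range(i + 1, len(proposals)):
            if cosine(proposals[i]["embedding"], proposals[j]["embedding"]) < conflict_threshold:
                # conflict detected: keep the proposal with the better validation score
                loser = i if proposals[i]["val_score"] < proposals[j]["val_score"] else j
                dropped.add(loser)
    survivors = [p for k, p in enumerate(proposals) if k not in dropped]
    total = sum(p["val_score"] for p in survivors)
    dim = len(survivors[0]["embedding"])
    fused = [sum(p["val_score"] * p["embedding"][d] for p in survivors) / total
             for d in range(dim)]
    return fused, [p["agent"] for p in survivors]

# Toy example: the 'style' agent's proposal opposes the other two and is dropped.
proposals = [
    {"agent": "instruction", "embedding": [1.0, 0.1],   "val_score": 0.9},
    {"agent": "format",      "embedding": [0.9, 0.2],   "val_score": 0.7},
    {"agent": "style",       "embedding": [-1.0, -0.1], "val_score": 0.4},
]
fused, kept_agents = fuse_gradients(proposals)
```

In this toy run the conflicting, lower-scoring "style" proposal is discarded and the fused direction is a weighted blend of the remaining agents' embeddings; a full system would map the fused direction back to a concrete prompt edit.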