To combat the prohibitive communication costs of ``free-for-all'' multi-agent systems (MAS), we introduce \textbf{Agent-GSPO}, a framework that directly optimizes for token economy using sequence-level reinforcement learning. Agent-GSPO leverages the stable and memory-efficient Group Sequence Policy Optimization (GSPO) algorithm to train agents on a communication-aware reward that explicitly penalizes verbosity. Across seven reasoning benchmarks, Agent-GSPO not only achieves new state-of-the-art performance but does so with a fraction of the token consumption of existing methods. By fostering emergent strategies like ``strategic silence,'' our approach provides a practical blueprint for developing scalable and economically viable multi-agent systems.
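To make the two ingredients named above concrete, the following is a minimal sketch, assuming the standard published GSPO formulation (a length-normalized sequence-level importance ratio, group-normalized advantages, and a clipped surrogate objective) together with a hypothetical linear per-token penalty coefficient \texttt{lam}; the function names and the exact reward shaping are illustrative assumptions, not the paper's implementation.

\begin{verbatim}
import numpy as np

def communication_aware_reward(task_correct, num_tokens, lam=1e-3):
    # Task success minus a verbosity penalty. `lam` is a
    # hypothetical per-token penalty coefficient, not the
    # paper's exact reward shaping.
    return float(task_correct) - lam * num_tokens

def gspo_sequence_ratio(logp_new, logp_old):
    # GSPO's sequence-level importance ratio,
    #   s = (pi_new(y|x) / pi_old(y|x)) ** (1 / |y|),
    # computed stably in log space from per-token log-probs.
    return np.exp((logp_new.sum() - logp_old.sum()) / len(logp_new))

def gspo_loss(ratios, rewards, eps=0.2):
    # Group-normalized advantages over a group of sampled
    # responses, followed by the clipped sequence-level
    # surrogate objective.
    adv = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    clipped = np.clip(ratios, 1.0 - eps, 1.0 + eps)
    return -np.mean(np.minimum(ratios * adv, clipped * adv))
\end{verbatim}

Under these assumptions, each agent message in a rollout would be scored with the communication-aware reward and updated through the sequence-level surrogate; the length normalization in the ratio keeps long, verbose sequences from dominating the gradient, which is what makes penalizing verbosity compatible with stable policy updates.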