Multi-agent systems have evolved into practical LLM-driven collaborators for many applications, gaining robustness from diversity and cross-checking. However, multi-agent RL (MARL) training is resource-intensive and unstable: co-adapting teammates induce non-stationarity, and rewards are often sparse and high-variance. To address this, we introduce \textbf{Multi-Agent Test-Time Reinforcement Learning (MATTRL)}, a framework that injects structured textual experience into multi-agent deliberation at inference time. MATTRL forms a team of specialist experts for multi-turn discussion, retrieves and integrates test-time experiences, and reaches consensus for final decision-making. We also study credit assignment for constructing a turn-level experience pool, which is then reinjected into the dialogue. Across challenging benchmarks in medicine, math, and education, MATTRL improves accuracy by an average of 3.67\% over a multi-agent baseline and by 8.67\% over comparable single-agent baselines. Ablation studies examine different credit-assignment schemes and provide a detailed comparison of how they affect outcomes. MATTRL offers a stable, effective, and efficient path to distribution-shift-robust multi-agent reasoning without fine-tuning.