Multi-Agent Reinforcement Learning involves agents that learn together in a shared environment, leading to emergent dynamics that are sensitive to initial conditions and parameter variations. A dynamical systems approach, which studies the evolution of multi-component systems over time, has uncovered some of the underlying dynamics by constructing deterministic approximation models of stochastic algorithms. In this work, we demonstrate that even in the simplest case of independent Q-learning with a Boltzmann exploration policy, significant discrepancies arise between the actual algorithm and previous approximations. We explain why these models in fact approximate interesting variants of the algorithm rather than the original incremental algorithm itself. To account for the discrepancies, we introduce a new discrete-time approximation model that explicitly incorporates the agents' update frequencies into the learning process and show that its dynamics fundamentally differ from the simplified dynamics of prior models. We illustrate the usefulness of our approach by applying it to the question of spontaneous cooperation in social dilemmas, specifically the Prisoner's Dilemma as the simplest case study. We identify conditions under which the learning behaviour appears, from an external perspective, as long-term stable cooperation. However, our model shows that this behaviour is merely a metastable transient phase and not a true equilibrium, making it exploitable. We further demonstrate how specific parameter settings can significantly exacerbate the moving target problem in independent learning. Through a systematic analysis of our model, we show that increasing the discount factor induces oscillations that prevent convergence to a joint policy. These oscillations arise from a supercritical Neimark-Sacker bifurcation, which transforms the unique stable fixed point into an unstable focus surrounded by a stable limit cycle.
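The setting described above can be sketched as follows: two independent Q-learners repeatedly playing the Prisoner's Dilemma, each sampling actions from a Boltzmann (softmax) policy over its own action values and applying the standard incremental Q-learning update. This is a minimal illustrative sketch, not the paper's model; the payoff values (T=5, R=3, P=1, S=0) and the hyperparameters `alpha`, `gamma`, and `temp` are assumptions chosen for illustration.

```python
import numpy as np

# Row player's payoffs in the Prisoner's Dilemma (assumed standard
# values T=5, R=3, P=1, S=0). Actions: 0 = cooperate, 1 = defect.
PAYOFF = np.array([[3.0, 0.0],
                   [5.0, 1.0]])

rng = np.random.default_rng(0)
alpha, gamma, temp = 0.1, 0.9, 1.0   # illustrative hyperparameters
Q = np.zeros((2, 2))                 # Q[i] = agent i's action values

def boltzmann(q, temp):
    """Boltzmann (softmax) exploration policy over action values."""
    z = np.exp((q - q.max()) / temp)  # subtract max for numerical stability
    return z / z.sum()

for step in range(5000):
    # Each agent independently samples an action from its own policy.
    actions = [rng.choice(2, p=boltzmann(Q[i], temp)) for i in range(2)]
    # Row player reads PAYOFF[a0, a1]; column player sees the transpose.
    rewards = [PAYOFF[actions[0], actions[1]],
               PAYOFF[actions[1], actions[0]]]
    # Incremental Q-learning update. Only the *chosen* action's value
    # is updated, so actions are updated at different frequencies --
    # precisely the effect the proposed approximation model accounts for.
    for i in range(2):
        a = actions[i]
        Q[i, a] += alpha * (rewards[i] + gamma * Q[i].max() - Q[i, a])

print(Q)
```

Because each agent's learning target depends on the other agent's evolving policy, this simple loop already exhibits the moving target problem the abstract refers to; the deterministic models under discussion approximate the expected evolution of exactly this kind of stochastic process.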