Multi-agent reinforcement learning is an area of rapid advancement in artificial intelligence and machine learning. One of the key open questions is how to perform credit assignment in a multi-agent system. Many schemes have been designed to conduct credit assignment in multi-agent reinforcement learning algorithms. Although these schemes have proven useful in improving the performance of multi-agent reinforcement learning, most are designed heuristically, without a rigorous theoretical basis, which makes it difficult to understand how agents cooperate. In this thesis, we investigate the foundation of credit assignment in multi-agent reinforcement learning through the lens of cooperative game theory. We first extend the convex game, a game model from cooperative game theory, and the Shapley value, a payoff distribution scheme, to the Markov decision process, naming the results the Markov convex game and the Markov Shapley value respectively. We show that a global reward game can be represented as a Markov convex game under the grand coalition, so the Markov Shapley value can reasonably serve as a credit assignment scheme in the global reward game. The Markov Shapley value possesses the following virtues: (i) efficiency; (ii) identifiability of dummy agents; (iii) reflection of each agent's contribution; and (iv) symmetry, which together constitute fair credit assignment. Based on the Markov Shapley value, we propose three multi-agent reinforcement learning algorithms: SHAQ, SQDDPG and SMFPPO. Furthermore, we extend the Markov convex game to partial observability to deal with partially observable problems, yielding the partially observable Markov convex game. In application, we evaluate SQDDPG and SMFPPO on a real-world problem in energy networks.
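To make the payoff distribution scheme concrete, the following is a minimal sketch of the classical (static) Shapley value that the thesis extends to the Markov setting. The function name `shapley_values` and the example characteristic function are illustrative choices, not part of the thesis; the sketch computes each agent's average marginal contribution over all join orders and checks the efficiency and symmetry properties mentioned above.

```python
from itertools import permutations

def shapley_values(agents, v):
    """Exact Shapley values for a cooperative game (agents, v),
    computed by averaging each agent's marginal contribution
    over all permutations (join orders) of the agents."""
    phi = {a: 0.0 for a in agents}
    perms = list(permutations(agents))
    for order in perms:
        coalition = frozenset()
        for a in order:
            with_a = coalition | {a}
            # Marginal contribution of agent a to the coalition so far.
            phi[a] += v(with_a) - v(coalition)
            coalition = with_a
    return {a: phi[a] / len(perms) for a in agents}

# Hypothetical 3-agent convex game: v(C) = |C|^2 is supermodular,
# so marginal contributions grow as the coalition grows.
v = lambda c: len(c) ** 2
phi = shapley_values([1, 2, 3], v)

# Efficiency: the credits sum to the grand coalition's value.
assert abs(sum(phi.values()) - v(frozenset({1, 2, 3}))) < 1e-9
# Symmetry: interchangeable agents receive equal credit.
assert phi[1] == phi[2] == phi[3]
```

In a convex game, joining a larger coalition never decreases an agent's marginal contribution, which is what makes credit assignment under the grand coalition well behaved; the Markov Shapley value carries this idea over to sequential decision making.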