Independent learners are agents that employ single-agent algorithms in multi-agent systems, intentionally ignoring the effect of other strategic agents. This paper studies mean-field games from a decentralized learning perspective, with two primary objectives: (i) to identify structure that can guide algorithm design, and (ii) to understand the emergent behaviour of systems of independent learners. We study a new model of partially observed mean-field games with finitely many players, local action observability, and a general observation channel for partial observations of the global state. Specific observation channels considered include (a) global observability, (b) local and mean-field observability, (c) local and compressed mean-field observability, and (d) only local observability. We establish conditions under which the control problem of a given agent is equivalent to a fully observed Markov decision process (MDP), as well as conditions under which the control problem is equivalent only to a partially observed Markov decision process (POMDP). Building on the connection to MDPs, we prove the existence of a perfect equilibrium among memoryless stationary policies under mean-field observability. Leveraging the connection to POMDPs, we prove convergence of the learning iterates obtained by independent learning agents under any of the aforementioned observation channels. We interpret the limiting values as subjective value functions, which an agent believes to be relevant to its control problem. These subjective value functions are then used to propose subjective Q-equilibrium, a new solution concept for partially observed n-player mean-field games, whose existence is proved under mean-field or global observability. We provide a decentralized learning algorithm for partially observed n-player mean-field games, and we show that it drives play to subjective Q-equilibrium by adapting the recently developed theory of satisficing paths to allow for subjectivity.
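To make the independent-learning setup concrete, the following is a minimal illustrative sketch (not the paper's algorithm): several tabular Q-learners in a toy n-player mean-field game, each observing only its own local state together with a coarsened empirical mean-field, roughly in the spirit of observation channel (c). The toy dynamics, reward, and all identifiers below are assumptions introduced purely for illustration.

```python
# Illustrative sketch only: independent tabular Q-learning with a local state
# plus a discretized mean-field observation. Each agent ignores the other
# agents' strategic behaviour and treats its observation process as if it
# were a single-agent MDP. Dynamics and rewards are made-up toy examples.

import numpy as np

rng = np.random.default_rng(0)

n_agents, n_states, n_actions, n_bins = 10, 3, 2, 5
alpha, gamma, eps, T = 0.1, 0.9, 0.1, 20000

def mean_field_bin(states, agent):
    """Fraction of *other* agents in state 0, discretized into n_bins bins."""
    frac = np.mean(np.delete(states, agent) == 0)
    return min(int(frac * n_bins), n_bins - 1)

def step(state, action, mf_frac):
    """Toy local transition: action 1 pushes toward state 0; crowding
    (many other agents already in state 0) lowers the reward."""
    if action == 1 and rng.random() < 0.8:
        next_state = 0
    else:
        next_state = int(rng.integers(n_states))
    reward = (1.0 if next_state == 0 else 0.0) - 0.5 * mf_frac
    return next_state, reward

# One Q-table per agent, indexed by (local state, mean-field bin, action).
Q = np.zeros((n_agents, n_states, n_bins, n_actions))
states = rng.integers(n_states, size=n_agents)

for t in range(T):
    bins = np.array([mean_field_bin(states, i) for i in range(n_agents)])
    # eps-greedy action selection, independently per agent
    greedy = np.array([Q[i, states[i], bins[i]].argmax() for i in range(n_agents)])
    explore = rng.integers(n_actions, size=n_agents)
    actions = np.where(rng.random(n_agents) < eps, explore, greedy)

    next_states = np.empty(n_agents, dtype=int)
    rewards = np.empty(n_agents)
    for i in range(n_agents):
        mf_frac = np.mean(np.delete(states, i) == 0)
        next_states[i], rewards[i] = step(states[i], actions[i], mf_frac)

    next_bins = np.array([mean_field_bin(next_states, i) for i in range(n_agents)])
    for i in range(n_agents):
        # Standard single-agent Q-learning update on the agent's own observation.
        td_target = rewards[i] + gamma * Q[i, next_states[i], next_bins[i]].max()
        Q[i, states[i], bins[i], actions[i]] += alpha * (
            td_target - Q[i, states[i], bins[i], actions[i]]
        )
    states = next_states

# The limiting Q-values play the role of "subjective" value functions: each
# agent behaves as if its (local state, mean-field bin) process were a fully
# observed single-agent MDP, even when it is not.
print(Q[0].round(2))
```

In this sketch the learned tables need not certify a best response against the other learners; they only summarize what each agent subjectively believes about its own control problem, which is the role the subjective value functions and subjective Q-equilibrium play in the abstract above.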