Mean field games (MFGs) offer a powerful framework for modeling large-scale multi-agent systems. This paper addresses MFGs formulated in continuous time with discrete state spaces, where agents' dynamics are governed by continuous-time Markov chains -- a setting relevant to applications such as population dynamics and queueing networks. While prior research has largely focused on theoretical aspects of continuous-time discrete-state MFGs, efficient computational methods for determining equilibria remain underdeveloped. Inspired by discrete-time approaches, we approximate classical Nash equilibria via regularization, which enables more computationally tractable solution algorithms. Specifically, we define regularized equilibria for continuous-time MFGs and extend the classical fixed-point iteration and fictitious play algorithms to these equilibria. We validate the effectiveness and practicality of our approach via illustrative numerical examples.
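To illustrate the kind of scheme the abstract describes, the sketch below solves a toy discrete-state MFG by entropy-regularized fictitious play on a time grid. All modeling choices here are illustrative assumptions, not the paper's model: a cyclic state space, a congestion running cost, a quadratic movement cost, a softmax (entropy-regularized) policy with temperature `tau`, and a time discretization of the continuous-time dynamics with step `dt`.

```python
import numpy as np

# Toy discrete-state MFG solved by entropy-regularized fictitious play.
# A sketch under illustrative assumptions: cyclic state space, congestion
# cost m[x], quadratic movement cost, softmax policies with temperature tau,
# and a time discretization (step dt) of the continuous-time dynamics.
n_states, T, dt = 5, 1.0, 0.05
steps = int(T / dt)
tau = 0.1                 # entropy-regularization temperature (assumed)
actions = [-1, 0, 1]      # move left / stay / move right on a cycle

def running_cost(x, m):
    # congestion cost: occupying a crowded state is expensive
    return m[x]

def solve_policy(flow):
    """Backward soft-Bellman pass given a flow of measures;
    returns time-indexed softmax policies."""
    u = np.zeros(n_states)            # zero terminal cost (assumed)
    policies = []
    for t in reversed(range(steps)):
        q = np.zeros((n_states, len(actions)))
        for x in range(n_states):
            for i, a in enumerate(actions):
                y = (x + a) % n_states
                q[x, i] = dt * (running_cost(x, flow[t]) + 0.5 * a * a) + u[y]
        # entropy-regularized (soft-min) backup and the induced softmax policy
        u = -tau * np.log(np.exp(-q / tau).sum(axis=1))
        policies.append(np.exp(-(q - u[:, None]) / tau))
    return policies[::-1]

def forward_flow(policies, m0):
    """Forward pass: propagate the population distribution under the policy."""
    m, flow = m0.copy(), []
    for t in range(steps):
        flow.append(m.copy())
        m_next = np.zeros(n_states)
        for x in range(n_states):
            for i, a in enumerate(actions):
                m_next[(x + a) % n_states] += m[x] * policies[t][x, i]
        m = m_next
    return flow

m0 = np.zeros(n_states); m0[0] = 1.0       # all mass starts in state 0
flow = [m0.copy() for _ in range(steps)]   # initial guess for the mean field
for k in range(1, 60):                     # fictitious play iterations
    pol = solve_policy(flow)
    new_flow = forward_flow(pol, m0)
    # fictitious play: average the best-response flow into the running belief
    flow = [(1 - 1 / k) * f + (1 / k) * g for f, g in zip(flow, new_flow)]

print(np.round(flow[-1], 3))   # final population distribution
```

The averaging step `(1 - 1/k) * f + (1/k) * g` is the fictitious-play update; replacing it with `flow = new_flow` recovers plain fixed-point iteration, which the regularization likewise stabilizes.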