Mean Field Games (MFGs) provide a framework for modeling and approximating the behavior of a large number of agents, and the computation of equilibria in MFGs has been a subject of sustained interest. Although various methods for approximating equilibria have been proposed, algorithms whose sequence of updated policies converges to an equilibrium, i.e., algorithms with last-iterate convergence, remain scarce. We propose a simple proximal-point-type algorithm for computing equilibria of MFGs, and we provide the first last-iterate convergence guarantee under a Lasry--Lions-type monotonicity condition. We further employ the Mirror Descent algorithm for the regularized MFG to efficiently approximate the update rule of the proximal point method. We demonstrate that the algorithm approximates an equilibrium to accuracy $\varepsilon$ after $\mathcal{O}(\log(1/\varepsilon))$ iterations. This research offers a tractable approach to large-scale and large-population games.
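To make the two-level structure concrete, below is a minimal, illustrative sketch in Python; it is not the paper's algorithm or setting, but a toy static game over actions on the simplex with a hand-picked monotone payoff $F(\pi) = c - \pi$. The names `proximal_point_mfg` and `md_step`, the step sizes, and the iteration counts are all assumptions for illustration. The outer loop performs a proximal-point update anchored at the previous policy; the inner loop approximates that update by entropic mirror descent on the KL-regularized game, mirroring the scheme the abstract describes.

```python
import numpy as np

FLOOR = 1e-12  # keep policies strictly positive so logarithms stay finite

def md_step(pi, payoff, lr):
    # one entropic mirror descent (multiplicative-weights) step on the simplex
    logits = np.log(pi) + lr * payoff
    w = np.exp(logits - logits.max())  # shift logits for numerical stability
    w = np.maximum(w / w.sum(), FLOOR)
    return w / w.sum()

def proximal_point_mfg(F, n_actions, eta=1.0, outer=100, inner=300, lr=0.05):
    pi = np.full(n_actions, 1.0 / n_actions)  # uniform initial policy
    for _ in range(outer):
        anchor = pi.copy()
        for _ in range(inner):
            # payoff of the KL-regularized game anchored at the previous
            # outer iterate: F(pi) - (1/eta) * (log pi - log anchor),
            # i.e. the gradient of KL(pi || anchor) up to an additive constant
            pi = md_step(pi, F(pi) - (np.log(pi) - np.log(anchor)) / eta, lr)
    return pi

# Illustrative crowd-averse payoff: F(pi) = c - pi is monotone, since
# <F(p) - F(q), p - q> = -||p - q||^2 <= 0 (a Lasry--Lions-type condition).
c = np.array([1.0, 0.5, 0.2])
print(np.round(proximal_point_mfg(lambda pi: c - pi, n_actions=3), 3))
# -> approximately [0.75, 0.25, 0.], the unique equilibrium of this toy game
```

Heuristically, the KL anchor makes each inner problem strongly monotone, so the inner mirror descent contracts quickly toward the regularized equilibrium; this is the intuition behind approximating each proximal step efficiently, consistent with the $\mathcal{O}(\log(1/\varepsilon))$ overall rate stated above.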