We study model-free policy learning for discrete-time mean-field control (MFC) problems with finite state space and compact action space. In contrast to the extensive literature on value-based methods for MFC, policy-based approaches remain largely unexplored due to the intrinsic dependence of transition kernels and rewards on the evolving population state distribution, which prevents the direct use of likelihood-ratio estimators of policy gradients from classical single-agent reinforcement learning. We introduce a novel perturbation scheme on the state-distribution flow and prove that the gradient of the resulting perturbed value function converges to the true policy gradient as the perturbation magnitude vanishes. This construction yields a fully model-free estimator based solely on simulated trajectories and an auxiliary estimate of the sensitivity of the state distribution. Building on this framework, we develop MF-REINFORCE, a model-free policy gradient algorithm for MFC, and establish explicit quantitative bounds on its bias and mean-squared error. Numerical experiments on representative mean-field control tasks demonstrate the effectiveness of the proposed approach.
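For context, the classical likelihood-ratio (REINFORCE) estimator that the abstract says cannot be applied directly can be sketched as follows. This is a minimal single-agent illustration, not the paper's MF-REINFORCE algorithm; the toy MDP, its dimensions, and all function names are illustrative assumptions. The key point is that the derivation of the score-function identity assumes the transition kernel `P` is fixed, whereas in MFC it depends on the evolving population distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy finite MDP (all quantities illustrative): 3 states, 2 actions.
n_states, n_actions, horizon = 3, 2, 20
P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))  # P[a, s, :] = transition kernel
R = rng.uniform(size=(n_states, n_actions))                       # R[s, a]   = reward

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def reinforce_gradient(theta, n_episodes=50, gamma=0.95):
    """Monte Carlo likelihood-ratio (score-function) policy-gradient estimate.

    Valid only because P and R here are fixed objects that do not depend on
    the policy's own state distribution -- exactly the assumption that fails
    in mean-field control, where the kernel depends on the population flow.
    """
    grad = np.zeros_like(theta)
    for _ in range(n_episodes):
        s, traj = 0, []
        for _ in range(horizon):
            pi = softmax(theta[s])
            a = rng.choice(n_actions, p=pi)
            traj.append((s, a, R[s, a]))
            s = rng.choice(n_states, p=P[a, s])
        g = 0.0
        for s, a, r in reversed(traj):    # accumulate returns-to-go
            g = r + gamma * g
            score = -softmax(theta[s])
            score[a] += 1.0               # grad of log pi(a|s) for a softmax policy
            grad[s] += score * g
    return grad / n_episodes

theta = np.zeros((n_states, n_actions))   # uniform initial policy
g_hat = reinforce_gradient(theta)
```

Because the softmax score vector sums to zero over actions, each state's row of the gradient estimate also sums to zero; in the mean-field setting this estimator is biased, which is what motivates the paper's perturbation scheme on the state-distribution flow.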