We propose a novel independent and payoff-based learning framework for stochastic games that is model-free, game-agnostic, and gradient-free. The learning dynamics follow a best-response-type actor-critic architecture, where agents update their strategies (actors) using feedback from two distinct critics: a fast critic that intuitively responds to observed payoffs under limited information, and a slow critic that deliberatively approximates the solution to the underlying dynamic programming problem. Crucially, the learning process relies on non-equilibrium adaptation through smoothed best responses to observed payoffs. We establish convergence to (approximate) equilibria in two-agent zero-sum and multi-agent identical-interest stochastic games over an infinite horizon. This provides one of the first payoff-based and fully decentralized learning algorithms with theoretical guarantees in both settings. Empirical results further validate the robustness and effectiveness of the proposed approach across both classes of games.
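To make the payoff-based architecture concrete, here is a minimal sketch of the core loop in a single-state identical-interest game: each agent keeps a fast critic (a running payoff estimate per own action, updated only from the observed common payoff) and nudges its mixed strategy toward a smoothed (logit) best response to that critic. This is an illustrative simplification, not the paper's algorithm: it omits the slow critic and the dynamic-programming component, and the payoff matrix, step sizes, and temperature below are hypothetical choices.

```python
import math
import random

def softmax(q, tau):
    """Smoothed (logit) best response to payoff estimates q."""
    m = max(q)
    e = [math.exp((x - m) / tau) for x in q]
    s = sum(e)
    return [x / s for x in e]

# Hypothetical identical-interest 2x2 game: action 0 is dominant,
# and the common payoff rises by 0.5 for each agent that plays it.
PAYOFF = [[1.0, 0.5], [0.5, 0.0]]

random.seed(0)
n_actions = 2
tau = 0.05     # smoothing temperature of the logit best response
beta = 0.5     # fast-critic step size (payoff tracking)
alpha = 0.01   # slower actor step size

# Per-agent fast critics and actors (mixed strategies); no agent
# observes the other's action or strategy, only its own payoff.
q = [[0.0, 0.0], [0.0, 0.0]]
pi = [[0.5, 0.5], [0.5, 0.5]]

for t in range(5000):
    acts = [random.choices(range(n_actions), weights=pi[i])[0]
            for i in range(2)]
    r = PAYOFF[acts[0]][acts[1]]           # common payoff, observed by both
    for i in range(2):
        a = acts[i]
        q[i][a] += beta * (r - q[i][a])    # payoff-based critic update
        br = softmax(q[i], tau)            # smoothed best response
        pi[i] = [(1 - alpha) * p + alpha * b for p, b in zip(pi[i], br)]

print([round(p[0], 2) for p in pi])
```

Under these settings both independent strategies concentrate on the dominant action, illustrating non-equilibrium adaptation from payoffs alone: neither agent uses a model of the game, the opponent's actions, or gradients.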