Credit assignment is a core challenge in multi-agent reinforcement learning (MARL), especially in large-scale systems with structured, local interactions. Graph-based Markov decision processes (GMDPs) capture such settings via an influence graph, but standard critics are poorly aligned with this structure: global value functions provide weak per-agent learning signals, while existing local constructions can be difficult to estimate and ill-behaved in infinite-horizon settings. We introduce the Diffusion Value Function (DVF), a factored value function for GMDPs that assigns to each agent a value component by diffusing rewards over the influence graph with temporal discounting and spatial attenuation. We show that DVF is well-defined, admits a Bellman fixed point, and decomposes the global discounted value via an averaging property. DVF can be used as a drop-in critic in standard RL algorithms and estimated scalably with graph neural networks. Building on DVF, we propose Diffusion A2C (DA2C) and a sparse message-passing actor, Learned DropEdge GNN (LD-GNN), for learning decentralised algorithms under communication costs. Across the firefighting benchmark and three distributed computation tasks (vector graph colouring and two transmit power optimisation problems), DA2C consistently outperforms local and global critic baselines, improving average reward by up to 11%.
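To make the construction concrete, here is one illustrative instantiation — a hedged sketch, not the paper's stated definition, since the abstract does not give the formula; the discount \(\gamma\), diffusion matrix \(P\), per-agent rewards \(r_j\), and agent count \(n\) below are our own notation. With temporal discount \(\gamma \in (0,1)\) and a row-stochastic diffusion matrix \(P\) built from the influence graph to provide spatial attenuation, a diffusion value function of this flavour could read

\[
V_i^{\mathrm{DVF}}(s) \;=\; \mathbb{E}_\pi\!\left[\,\sum_{t=0}^{\infty} \gamma^{t} \sum_{j=1}^{n} \big(P^{t}\big)_{ij}\, r_j(s_t, a_t) \;\middle|\; s_0 = s \right].
\]

If \(P\) is additionally doubly stochastic (an assumption), the columns of \(P^{t}\) sum to one, \(\sum_{i} (P^{t})_{ij} = 1\), so the agent average \(\tfrac{1}{n}\sum_{i} V_i^{\mathrm{DVF}}(s)\) equals \(\tfrac{1}{n}\) times the global discounted value — one way the averaging property mentioned above could arise.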