We characterize offline data poisoning attacks on Multi-Agent Reinforcement Learning (MARL), where an attacker may modify a dataset in an attempt to install a (potentially fictitious) unique Markov-perfect Nash equilibrium in a two-player zero-sum Markov game. We propose the unique Nash set, namely the set of games, specified by their Q functions, in which a given joint policy is the unique Nash equilibrium. The unique Nash set is central to poisoning attacks because an attack succeeds if and only if data poisoning pushes all plausible games inside the set. The unique Nash set generalizes the reward polytope commonly used in inverse reinforcement learning to MARL. For zero-sum Markov games, both the unique Nash set and the set of plausible games induced by the data are polytopes in Q function space. We exhibit a linear program that efficiently computes the optimal poisoning attack. Our work sheds light on the structure of data poisoning attacks on offline MARL, a necessary step before one can design more robust MARL algorithms.
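To make the polytope structure concrete, the following is a minimal sketch of the poisoning idea for the simplest case: a single-state (matrix) zero-sum game with one plausible Q function. Here the target joint action is made a strict saddle point, and hence the unique pure Nash equilibrium, by an L1-minimal modification of Q; the uniqueness constraints are linear in Q, so the attack is a linear program. The function name, the strict-margin parameter, and the single-state simplification are illustrative assumptions, not the paper's construction, which handles Markov games and data-induced sets of plausible games.

```python
import numpy as np
from scipy.optimize import linprog

def poison_matrix_game(Q, target, margin=0.1):
    """Hypothetical sketch: find an L1-minimal perturbation delta of a
    zero-sum payoff matrix Q (row player maximizes, column player minimizes)
    so that the target joint action becomes a strict saddle point of
    Q + delta, with strictness at least `margin`."""
    m, n = Q.shape
    i_t, j_t = target
    N = m * n
    idx = lambda i, j: i * n + j
    # Decision variables: [delta (N entries), t (N entries)], with t >= |delta|.
    c = np.concatenate([np.zeros(N), np.ones(N)])  # minimize sum of t = ||delta||_1
    A, b = [], []
    # Absolute-value linearization: delta_k <= t_k and -delta_k <= t_k.
    for k in range(N):
        row = np.zeros(2 * N); row[k] = 1.0; row[N + k] = -1.0
        A.append(row); b.append(0.0)
        row = np.zeros(2 * N); row[k] = -1.0; row[N + k] = -1.0
        A.append(row); b.append(0.0)
    # Row deviations: (Q+delta)[i, j_t] <= (Q+delta)[i_t, j_t] - margin for i != i_t.
    for i in range(m):
        if i == i_t:
            continue
        row = np.zeros(2 * N)
        row[idx(i, j_t)] = 1.0; row[idx(i_t, j_t)] = -1.0
        A.append(row); b.append(Q[i_t, j_t] - Q[i, j_t] - margin)
    # Column deviations: (Q+delta)[i_t, j] >= (Q+delta)[i_t, j_t] + margin for j != j_t.
    for j in range(n):
        if j == j_t:
            continue
        row = np.zeros(2 * N)
        row[idx(i_t, j_t)] = 1.0; row[idx(i_t, j)] = -1.0
        A.append(row); b.append(Q[i_t, j] - Q[i_t, j_t] - margin)
    res = linprog(c, A_ub=np.array(A), b_ub=np.array(b),
                  bounds=[(None, None)] * N + [(0, None)] * N)
    return res.x[:N].reshape(m, n)
```

The feasible region of this LP is exactly the (single-state, strict-margin) analogue of the unique Nash set: a polytope in Q function space cut out by the deviation inequalities at the target policy.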