We study two-player zero-sum concurrent stochastic games with finite state and action spaces, played for an infinite number of steps. In every step, the two players simultaneously and independently choose an action. Given the current state and the chosen actions, the next state is obtained according to a stochastic transition function. An objective is a measurable function on plays (i.e., infinite trajectories) of the game, and the value for an objective is the maximal expectation that a player can guarantee against the adversarial player. We consider: (a) stateful-discounted objectives, which are similar to classical discounted-sum objectives, except that each state is associated with its own discount factor rather than there being a single discount factor; and (b) parity objectives, which are a canonical representation for $\omega$-regular objectives. For stateful-discounted objectives, given an ordering of the discount factors, the limit value is the limit of the value of the stateful-discounted objectives as the discount factors approach zero in the given order. The computational problem we consider is the approximation of the value within an arbitrary additive error. This problem is known to be in EXPSPACE for the limit value of stateful-discounted objectives and in PSPACE for parity objectives. The best-known algorithms for both problems run in at least exponential time, with an exponential dependence on the number of states and actions. Our main results for the value-approximation problem for the limit value of stateful-discounted objectives and for parity objectives are as follows: (a) we establish membership in TFNP[NP]; and (b) we present algorithms that improve the dependence on the number of actions in the exponent from linear to logarithmic. In particular, if the number of states is constant, our algorithms run in polynomial time.
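To make the notion of stateful discounting concrete, the following is a minimal sketch of value iteration in the one-player (Markov decision process) special case, under one common convention in which each state $s$ carries its own discount factor $\lambda(s)$ applied to the continuation value. This is only an illustration of the objective, not the paper's algorithm: in the two-player concurrent setting studied here, the maximum over actions is replaced by the value of a matrix game at each state, and the results concern the limit as the discount factors approach zero. All names below are illustrative.

```python
def value_iteration(states, actions, reward, trans, disc, iters=1000):
    """One-player value iteration for a stateful-discounted objective.

    reward[s][a] : immediate reward for playing action a in state s
    trans[s][a]  : dict mapping successor state s' to its probability
    disc[s]      : per-state discount factor lambda(s) in [0, 1)
    """
    v = {s: 0.0 for s in states}
    for _ in range(iters):
        # Bellman update: each state discounts its continuation value
        # by its own factor disc[s], rather than a single global factor.
        v = {
            s: max(
                reward[s][a]
                + disc[s] * sum(p * v[t] for t, p in trans[s][a].items())
                for a in actions[s]
            )
            for s in states
        }
    return v


# Toy instance: a single state with a self-loop, reward 1, discount 0.5;
# the stateful-discounted value is the geometric sum 1 / (1 - 0.5) = 2.
states = ["s"]
actions = {"s": ["a"]}
reward = {"s": {"a": 1.0}}
trans = {"s": {"a": {"s": 1.0}}}
disc = {"s": 0.5}
print(value_iteration(states, actions, reward, trans, disc)["s"])  # ~2.0
```

With several states carrying different discount factors, the same update applies unchanged; the ordering of the factors only matters for the limit value, where they are sent to zero one after another in the given order.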