We know that the Nash equilibria of a game cannot be computed efficiently unless $P = PPAD$. But can they be learned? Are there dynamics that (1) can be computed efficiently by the players at each strategy profile and (2) are guaranteed to converge to the Nash equilibria? This question is as ancient as the Nash equilibrium itself and antedates current complexity considerations by many decades. It was recently proved in MPPS23 that no such dynamics can exist in general; however, the game used in that proof is degenerate, and a strong assumption of uniform convergence to a continuum of Nash equilibria is employed. We point out that both assumptions are necessary for that proof: Nash-convergent dynamics do exist that converge to all Nash equilibria of non-degenerate games, and in fact we describe two very different families of such dynamics. However, we show that both of these families are intractable to compute locally, unless complexity classes collapse. We formulate a complexity-theoretic Impossibility Conjecture: if a locally tractable Nash-convergent dynamic exists, then $P = PPAD$. This is a novel kind of conjecture, combining topology and complexity, because the dynamic is only required to be tractable locally -- it could take exponentially many steps to converge, so long as the work done at each step is polynomial. Next, we show that no locally tractable Nash-convergent dynamic that admits a locally tractable Lyapunov function can exist unless $PPAD = CLS$. Finally, for the most general impossibility conjecture, we provide a complexity-theoretic explanation of why it may be difficult to settle: we introduce a Proving Game to demonstrate that black-box reductions cannot distinguish between convergent and non-convergent dynamics in polynomial time.
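To make the notion of a "locally tractable dynamic" concrete, here is a toy illustration, not taken from the paper: sequential best-response dynamics in a hypothetical $2\times 2$ coordination game. Each step is computable in polynomial time from the current strategy profile alone, and in this particular game the dynamic reaches a pure Nash equilibrium; in general games, such simple dynamics may cycle and need not converge.

```python
import numpy as np

# Illustrative 2x2 coordination game (an assumption for this sketch,
# not the construction from the paper): both players prefer to match.
A = np.array([[2, 0], [0, 1]])  # row player's payoffs
B = np.array([[2, 0], [0, 1]])  # column player's payoffs

def best_response_step(profile):
    """One locally computable step: the row player best-responds to the
    current column, then the column player responds to the new row."""
    i, j = profile
    i = int(np.argmax(A[:, j]))  # row player's best response to column j
    j = int(np.argmax(B[i, :]))  # column player's best response to row i
    return (i, j)

def run_dynamic(profile, max_steps=100):
    """Iterate the dynamic until a fixed point (a pure Nash equilibrium)
    is reached or the step budget runs out."""
    for _ in range(max_steps):
        nxt = best_response_step(profile)
        if nxt == profile:  # fixed point => no player wants to deviate
            return profile
        profile = nxt
    return profile

print(run_dynamic((1, 0)))  # from a mis-coordinated start -> (0, 0)
```

The point of the impossibility results above is that such per-step tractability is easy to achieve, but guaranteeing convergence to Nash equilibria in arbitrary games is where the complexity-theoretic obstructions arise.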