The deployment of ever-larger machine learning models reflects a growing consensus that the more expressive the model class one optimizes over, and the more data one has access to, the better the performance one can achieve. As models are deployed in a variety of real-world scenarios, they inevitably face strategic environments. In this work, we consider the natural question of how the interplay between models and strategic interactions affects the relationship between performance at equilibrium and the expressivity of model classes. We find that strategic interactions can break the conventional view: performance does not necessarily improve monotonically as model classes grow larger or more expressive (even with infinite data). We show the implications of this result in several contexts, including strategic regression, strategic classification, and multi-agent reinforcement learning. In particular, we show that each of these settings admits a Braess' paradox-like phenomenon in which optimizing over less expressive model classes allows one to achieve strictly better equilibrium outcomes. Motivated by these examples, we then propose a new paradigm for model selection in games, wherein an agent seeks to choose among different model classes to use as their action set in a game.
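As a minimal sketch of the Braess'-paradox-like flavor of this claim (not an example from the paper), consider a prisoner's dilemma: defection strictly dominates cooperation, so the unique equilibrium of the full game is mutual defection, yet restricting both players to the smaller action set containing only cooperation yields a strictly better equilibrium for everyone. The brute-force equilibrium finder and payoff matrix below are illustrative constructions, not code from this work.

```python
from itertools import product

def pure_nash(payoffs, actions_a, actions_b):
    """Return the pure-strategy Nash equilibria of a two-player game.

    payoffs[(a, b)] = (payoff to player A, payoff to player B).
    """
    equilibria = []
    for a, b in product(actions_a, actions_b):
        ua, ub = payoffs[(a, b)]
        # (a, b) is an equilibrium if neither player can gain by deviating
        # unilaterally within their own action set.
        best_a = all(payoffs[(a2, b)][0] <= ua for a2 in actions_a)
        best_b = all(payoffs[(a, b2)][1] <= ub for b2 in actions_b)
        if best_a and best_b:
            equilibria.append((a, b))
    return equilibria

# Prisoner's-dilemma payoffs: "D" strictly dominates "C", yet (D, D)
# gives both players less than (C, C).
payoffs = {
    ("C", "C"): (3, 3), ("C", "D"): (0, 4),
    ("D", "C"): (4, 0), ("D", "D"): (1, 1),
}

# Full ("more expressive") action sets: the unique equilibrium is (D, D).
print(pure_nash(payoffs, ["C", "D"], ["C", "D"]))  # [('D', 'D')]

# Restricted ("less expressive") action sets: equilibrium improves to (C, C).
print(pure_nash(payoffs, ["C"], ["C"]))            # [('C', 'C')]
```

The same logic underlies the paper's settings: shrinking the set a player optimizes over can remove profitable deviations and thereby move the equilibrium to a strictly better outcome.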