The deployment of ever-larger machine learning models reflects a growing consensus that the more expressive the model, and the more data one has access to, the more one can improve performance. As models are deployed in a variety of real-world scenarios, they inevitably face strategic environments. In this work, we consider the natural question of how the interplay of models and strategic interactions affects scaling laws. We find that strategic interactions can break the conventional view of scaling laws: performance does not necessarily improve monotonically as models become larger and/or more expressive (even with infinite data). We show the implications of this phenomenon in several contexts, including strategic regression, strategic classification, and multi-agent reinforcement learning, through examples of strategic environments in which, by simply restricting the expressivity of one's model or policy class, one can achieve strictly better equilibrium outcomes. Motivated by these examples, we then propose a new paradigm for model selection in games wherein an agent seeks to choose among different model classes to use as their action set in a game.
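To make the core phenomenon concrete, below is a minimal, self-contained sketch (not from the paper; the payoff matrix is an invented toy example) of a two-player game in which restricting a player's action set, the analogue of choosing a less expressive model class, strictly improves that player's equilibrium payoff.

```python
import itertools

# Toy 2x2 game (hypothetical payoffs, chosen purely for illustration).
# Rows are player 1's actions (their "model class"), columns are player 2's
# responses. Entry (i, j) holds (payoff to player 1, payoff to player 2).
PAYOFFS = {
    ("a", "x"): (1, 2),
    ("a", "y"): (3, 0),
    ("b", "x"): (0, 1),
    ("b", "y"): (2, 2),
}

def pure_nash(actions1, actions2):
    """Return all pure-strategy Nash equilibria of the (restricted) game."""
    equilibria = []
    for a1, a2 in itertools.product(actions1, actions2):
        u1, u2 = PAYOFFS[(a1, a2)]
        # No profitable unilateral deviation for player 1 ...
        best1 = all(PAYOFFS[(d, a2)][0] <= u1 for d in actions1)
        # ... and none for player 2.
        best2 = all(PAYOFFS[(a1, d)][1] <= u2 for d in actions2)
        if best1 and best2:
            equilibria.append(((a1, a2), (u1, u2)))
    return equilibria

# "Expressive" class: player 1 may play either action.
print(pure_nash(["a", "b"], ["x", "y"]))  # [(('a', 'x'), (1, 2))] -> player 1 earns 1
# "Restricted" class: player 1 can only play b; player 2 best-responds.
print(pure_nash(["b"], ["x", "y"]))       # [(('b', 'y'), (2, 2))] -> player 1 earns 2
```

In this toy game, removing action `a` changes player 2's best response, so the restricted player earns 2 at equilibrium instead of 1: the restriction acts as a form of commitment, which is the intuition behind why a less expressive model class can yield strictly better equilibrium outcomes.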