Traditional supervised learning aims to learn an unknown mapping by fitting a function to a set of input-output pairs of a fixed dimension. The fitted function is then defined only on inputs of that same dimension. However, in many settings, the unknown mapping takes inputs in any dimension; examples include graph parameters defined on graphs of any size and physical quantities defined on an arbitrary number of particles. We leverage a recently discovered phenomenon in algebraic topology, called representation stability, to define equivariant neural networks that can be trained with data in a fixed dimension and then extended to accept inputs in any dimension. Our approach is user-friendly, requiring only the network architecture and the groups for equivariance, and can be combined with any training procedure. We provide a simple open-source implementation of our methods and offer preliminary numerical experiments.
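To illustrate the general idea of a network whose learned weights transfer across input dimensions, here is a minimal sketch of a permutation-equivariant linear layer in the DeepSets style. This is an assumption-laden toy example, not the paper's representation-stability construction: the layer has only two scalar parameters (`lam`, `gam`), so values fit on inputs of one dimension apply unchanged to inputs of any other dimension.

```python
# Hedged sketch: a DeepSets-style permutation-equivariant layer, used here
# only to illustrate weight transfer across input dimensions. It is NOT
# the construction proposed in the paper.
import numpy as np

def equivariant_layer(x, lam, gam):
    """Map x in R^n to lam*x + gam*mean(x)*1; commutes with permutations
    and is defined for every input dimension n."""
    return lam * x + gam * x.mean() * np.ones_like(x)

rng = np.random.default_rng(0)

# Equivariance check on a size-5 input: permuting the input permutes the output.
x = rng.normal(size=5)
perm = rng.permutation(5)
out = equivariant_layer(x, 2.0, -0.5)
assert np.allclose(equivariant_layer(x[perm], 2.0, -0.5), out[perm])

# The same two parameters accept a size-8 input with no retraining.
y = rng.normal(size=8)
print(equivariant_layer(y, 2.0, -0.5).shape)  # (8,)
```

The design choice that makes this work is that the parameters are indexed by orbit types of the symmetric group action rather than by coordinates, so the parameter count is independent of the input dimension.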