How do neural networks trained on sequences acquire the ability to perform structured operations, such as arithmetic, geometric, and algorithmic computation? To gain insight into this question, we introduce the sequential group composition task. In this task, networks receive a sequence of elements from a finite group, encoded in a real vector space, and must predict their cumulative product. The task can be order-sensitive, and learning it requires a nonlinear architecture. Our analysis isolates the roles of the group structure, the encoding statistics, and the sequence length in shaping learning. We prove that two-layer networks learn this task one irreducible representation of the group at a time, in an order determined by the Fourier statistics of the encoding. These networks can learn the task perfectly, but doing so requires a hidden width exponential in the sequence length $k$. In contrast, we show how deeper models exploit the associativity of the task to dramatically improve this scaling: recurrent neural networks compose elements sequentially in $k$ steps, while multilayer networks compose adjacent pairs in parallel across $\log k$ layers. Overall, the sequential group composition task offers a tractable window into the mechanics of deep learning.
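As a concrete illustration of the task setup, here is a minimal sketch assuming the symmetric group $S_3$ with one-hot encodings; the specific group and the uniform sampling are hypothetical choices, since the abstract only specifies a finite group encoded in a real vector space. Composition in $S_3$ is non-commutative, so this instance of the task is order-sensitive.

```python
import itertools
import numpy as np

# Elements of S_3 as permutations of (0, 1, 2); composition is order-sensitive.
PERMS = list(itertools.permutations(range(3)))
INDEX = {p: i for i, p in enumerate(PERMS)}

def compose(p, q):
    """(p o q)(i) = p(q(i)): apply q first, then p."""
    return tuple(p[q[i]] for i in range(3))

def make_batch(k, batch_size, rng):
    """Sample length-k sequences of S_3 elements and their cumulative product.

    Inputs are one-hot vectors in R^6 (one illustrative real encoding); the
    target is the one-hot encoding of the product g_1 * g_2 * ... * g_k.
    """
    n = len(PERMS)  # |S_3| = 6
    idx = rng.integers(0, n, size=(batch_size, k))
    targets = []
    for row in idx:
        prod = PERMS[row[0]]
        for j in row[1:]:
            prod = compose(prod, PERMS[j])
        targets.append(INDEX[prod])
    x = np.eye(n)[idx]                 # (batch_size, k, 6) one-hot inputs
    y = np.eye(n)[np.array(targets)]   # (batch_size, 6) one-hot targets
    return x, y

x, y = make_batch(k=4, batch_size=32, rng=np.random.default_rng(0))
```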
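The depth-scaling contrast also admits a short sketch. Both schedules below compute the same product by associativity, but the first needs $k-1$ sequential steps (the recurrent strategy), while the second finishes in $\lceil \log_2 k \rceil$ parallel levels (the multilayer strategy). This models the group-theoretic reductions the abstract describes, not the trained networks themselves; the `compose` helper from the previous sketch is assumed.

```python
from functools import reduce

def compose_sequential(elems):
    """RNN-style schedule: fold left, one composition per step -> k - 1 steps."""
    return reduce(compose, elems)

def compose_parallel(elems):
    """Multilayer-style schedule: compose adjacent pairs level by level.

    Each level halves the sequence, so ceil(log2 k) levels suffice;
    associativity guarantees agreement with the sequential fold.
    """
    level = list(elems)
    while len(level) > 1:
        nxt = [compose(level[i], level[i + 1])
               for i in range(0, len(level) - 1, 2)]
        if len(level) % 2:           # an odd element carries to the next level
            nxt.append(level[-1])
        level = nxt
    return level[0]
```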