One of the central questions in the theory of deep learning is to understand how neural networks learn hierarchical features. The ability of deep networks to extract salient features is crucial both to their outstanding generalization and to the modern paradigm of pretraining and fine-tuning. However, this feature learning process remains poorly understood from a theoretical perspective, with existing analyses largely restricted to two-layer networks. In this work we show that three-layer neural networks have provably richer feature learning capabilities than their two-layer counterparts. We analyze the features learned by a three-layer network trained with layer-wise gradient descent, and present a general-purpose theorem that upper-bounds the sample complexity and width needed to achieve low test error when the target has a specific hierarchical structure. We instantiate our framework in two statistical learning settings, single-index models and functions of quadratic features, and show that in the latter setting three-layer networks obtain a sample complexity improvement over all existing guarantees for two-layer networks. Crucially, this improvement relies on the ability of three-layer networks to efficiently learn nonlinear features. We then establish a concrete optimization-based depth separation by constructing a function that is efficiently learnable via gradient descent on a three-layer network, yet cannot be learned efficiently by any two-layer network. Our work makes progress toward understanding the provable benefit of three-layer networks over two-layer networks in the feature learning regime.
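To make the setup concrete, the following is a minimal toy sketch (not the paper's actual algorithm or theorem settings) of layer-wise training on a target with hierarchical structure: a quadratic feature composed with an outer nonlinearity. All names, the target function, the width, and the single-step schedule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, width, n = 10, 64, 2000

# Illustrative hierarchical target: a scalar quadratic feature
# q(x) = (w* . x)^2 composed with an outer nonlinearity.
w_star = rng.standard_normal(d)
w_star /= np.linalg.norm(w_star)

def target(X):
    q = (X @ w_star) ** 2   # nonlinear (quadratic) inner feature
    return np.sin(q)        # hierarchical composition

X = rng.standard_normal((n, d))
y = target(X)

# Three-layer net: x -> relu(W1 x) -> relu(W2 h1) -> a . h2
W1 = rng.standard_normal((width, d)) / np.sqrt(d)
W2 = rng.standard_normal((width, width)) / np.sqrt(width)
a = rng.standard_normal(width) / np.sqrt(width)
relu = lambda z: np.maximum(z, 0.0)

def forward(X):
    h1 = relu(X @ W1.T)
    h2 = relu(h1 @ W2.T)
    return h1, h2, h2 @ a

# Layer-wise schedule (toy version): one gradient step on the middle
# layer with the rest frozen, then refit only the output weights on
# the learned features via least squares.
lr = 0.1
h1, h2, pred = forward(X)
err = pred - y  # dL/dpred for 0.5 * mean squared error
grad_W2 = ((err[:, None] * a) * (h2 > 0)).T @ h1 / n
W2 -= lr * grad_W2

_, h2, _ = forward(X)
a, *_ = np.linalg.lstsq(h2, y, rcond=None)

_, _, pred = forward(X)
mse = np.mean((pred - y) ** 2)
print(f"train MSE after layer-wise step: {mse:.4f}")
```

The layer-wise structure, training inner layers before fitting the output layer, mirrors the abstract's description at a cartoon level; the actual analysis and step schedule in the paper may differ substantially.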