In deep learning, a central question is how neural networks efficiently learn high-dimensional features. To this end, we study gradient descent learning of a general Gaussian multi-index model $f(\boldsymbol{x})=g(\boldsymbol{U}\boldsymbol{x})$ with hidden subspace $\boldsymbol{U}\in \mathbb{R}^{r\times d}$, the canonical setup for studying representation learning. We prove that, under generic non-degeneracy assumptions on the link function, a standard two-layer neural network trained via layer-wise gradient descent can agnostically learn the target to $o_d(1)$ test error using $\widetilde{\mathcal{O}}(d)$ samples and $\widetilde{\mathcal{O}}(d^2)$ time. Both the sample and time complexity match the information-theoretic limit up to leading order and are therefore optimal. For the first stage of training, the proof proceeds by showing that the inner weights perform a power-iteration process, which implicitly mimics a spectral initialization for the full span of the hidden subspace and eventually suppresses the finite-sample noise and recovers that span. Surprisingly, this implies that the optimal guarantees are attainable only if the first layer is trained for more than $\mathcal{O}(1)$ steps. This work demonstrates that neural networks can effectively learn hierarchical functions with both sample and time efficiency.
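To make the power-iteration picture concrete, here is a minimal numerical sketch (Python/NumPy), not the paper's actual training procedure: a noisy symmetric surrogate matrix `M` stands in for an empirical moment built from roughly $\widetilde{\mathcal{O}}(d)$ samples, and repeated multiply-then-orthonormalize steps recover the span of a hidden subspace. All names and parameter choices below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 200, 3

# Hidden subspace U with orthonormal rows, as in f(x) = g(Ux).
U = np.linalg.qr(rng.standard_normal((d, r)))[0].T  # shape (r, d)

# Noisy surrogate M ~ U^T U + symmetric noise; a hypothetical stand-in for
# an empirical matrix whose top-r eigenspace aligns with span(U^T).
noise = rng.standard_normal((d, d)) / np.sqrt(d)
M = U.T @ U + 0.1 * (noise + noise.T) / 2

# Orthogonal (block power) iteration on a width-r weight block: each step is
# one matrix product plus re-orthonormalization, so T steps cost O(T d^2)
# time, consistent with needing more than O(1) steps to denoise fully.
W = np.linalg.qr(rng.standard_normal((d, r)))[0]
for _ in range(50):
    W = np.linalg.qr(M @ W)[0]

# Alignment check: singular values of U W close to 1 indicate that the
# iterates span the hidden subspace up to small finite-sample error.
print(np.linalg.svd(U @ W, compute_uv=False))
```

In this toy setting the printed singular values approach 1 as the iteration count grows, mirroring how, in the paper's analysis, additional first-layer gradient steps progressively eliminate the finite-sample noise.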