Our theoretical understanding of neural networks lags behind their empirical success. One important unexplained phenomenon is why and how, during training with gradient descent, the theoretical capacity of a neural network is reduced to an effective capacity that fits the task. Here we investigate the mechanism by which gradient descent achieves this, analyzing the learning dynamics at the level of individual neurons in single-hidden-layer ReLU networks. We identify three dynamical principles -- mutual alignment, unlocking and racing -- that together explain why capacity can often be successfully reduced after training by merging equivalent neurons or pruning low-norm weights. In particular, we explain the mechanism behind the lottery ticket conjecture: why the specific, beneficial initial conditions of some neurons lead them to attain higher weight norms.
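To make this post-training capacity-reduction step concrete, the NumPy sketch below merges hidden neurons whose incoming weight vectors are (nearly) parallel, exploiting the positive homogeneity of ReLU, and prunes neurons with negligible weight norm. The function names (`forward`, `merge_and_prune`) and the thresholds `cos_thresh` and `norm_thresh` are illustrative assumptions, not constructs from the paper.

```python
import numpy as np

def forward(X, W, a):
    """Single-hidden-layer ReLU network: f(x) = sum_i a_i * relu(w_i . x)."""
    return np.maximum(X @ W.T, 0.0) @ a

def merge_and_prune(W, a, cos_thresh=0.999, norm_thresh=1e-3):
    """Post-training capacity reduction:
    (1) prune neurons whose incoming weight norm is negligible;
    (2) merge neurons whose incoming weights are (nearly) parallel.
    ReLU is positively homogeneous, a_j * relu(c*w . x) = c * a_j * relu(w . x)
    for c > 0, so aligned neurons fold into a single equivalent neuron."""
    norms = np.linalg.norm(W, axis=1)
    keep = norms > norm_thresh                  # prune low-norm neurons
    W, a, norms = W[keep], a[keep], norms[keep]
    dirs = W / norms[:, None]                   # unit input-weight directions

    merged_W, merged_a = [], []
    used = np.zeros(len(a), dtype=bool)
    for i in range(len(a)):
        if used[i]:
            continue
        group = (dirs @ dirs[i] > cos_thresh) & ~used  # neurons aligned with i
        used |= group
        merged_W.append(W[i])
        # preserve the group's total contribution along direction dirs[i]
        merged_a.append(np.sum(a[group] * norms[group]) / norms[i])
    return np.stack(merged_W), np.array(merged_a)

# Sanity check: random weights contain no aligned or low-norm neurons,
# so the reduced network computes exactly the same function.
rng = np.random.default_rng(0)
W, a = rng.normal(size=(64, 10)), rng.normal(size=64)
X = rng.normal(size=(5, 10))
W2, a2 = merge_and_prune(W, a)
assert np.allclose(forward(X, W, a), forward(X, W2, a2))
```

After training, when the alignment and racing dynamics have produced groups of parallel neurons and a tail of low-norm ones, a pass like this shrinks the hidden layer while leaving the computed function (approximately) unchanged.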