One of the primary reasons behind the success of neural networks has been the emergence of an array of new, highly successful optimizers, perhaps most importantly the Adam optimizer. It is widely used for training neural networks, yet notoriously hard to interpret. Lacking a clear physical intuition, Adam is difficult to generalize to manifolds. Some attempts have been made to directly apply parts of the Adam algorithm to manifolds or to find an underlying structure, but a full generalization has remained elusive. In this work, a new approach is presented that leverages the special structure of the manifolds that are relevant for the optimization of neural networks, such as the Stiefel manifold, the symplectic Stiefel manifold, the Grassmann manifold, and the symplectic Grassmann manifold: all of these are homogeneous spaces and as such admit a global tangent space representation. All steps of the Adam optimizer are performed in this global tangent space representation, which yields a full generalization of the optimizer to manifolds without a projection step. The resulting algorithm is then applied to train a transformer whose orthogonality constraints are enforced up to machine precision, and we observe significant speed-ups in the training process.
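The idea can be illustrated with a minimal sketch, which is not the paper's exact algorithm: on the Stiefel manifold {Y : YᵀY = I}, the Euclidean gradient is lifted to a skew-symmetric matrix (one choice of a global, basepoint-independent representation of tangent vectors), the Adam moments are accumulated in that flat space, and the update is applied with a Cayley retraction, so the iterates stay on the manifold to machine precision with no projection step. All names and hyperparameter defaults below are illustrative assumptions.

```python
import numpy as np

def cayley(A):
    """Cayley transform of a skew-symmetric A; the result is orthogonal."""
    I = np.eye(A.shape[0])
    return np.linalg.solve(I - 0.5 * A, I + 0.5 * A)

def manifold_adam_step(Y, G, m, v, t, lr=1e-2, b1=0.9, b2=0.99, eps=1e-8):
    """One hypothetical Adam-like step on the Stiefel manifold {Y : Y^T Y = I}.

    The Euclidean gradient G is lifted to a skew-symmetric matrix Omega (a
    global representation of the tangent vector), the Adam moments m and v
    live in that flat space, and the step is applied via a Cayley retraction.
    """
    Omega = G @ Y.T - Y @ G.T            # skew-symmetric lift of the gradient
    m = b1 * m + (1 - b1) * Omega        # first moment, kept in the flat space
    v = b2 * v + (1 - b2) * Omega**2     # elementwise second moment (symmetric)
    m_hat = m / (1 - b1**t)              # the usual Adam bias corrections
    v_hat = v / (1 - b2**t)
    # Elementwise division by the symmetric sqrt(v_hat) keeps the step skew,
    # so the Cayley retraction below is orthogonal and preserves Y^T Y = I.
    step = m_hat / (np.sqrt(v_hat) + eps)
    Y = cayley(-lr * step) @ Y
    return Y, m, v
```

A usage pattern: initialize `Y` with orthonormal columns (e.g. via a QR decomposition), keep `m` and `v` as zero matrices of shape `(n, n)`, and call `manifold_adam_step` once per gradient; the orthogonality of `Y` is then maintained across iterations without any re-orthonormalization.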