Transformers have demonstrated remarkable power in the recent development of large foundation models. In particular, the Vision Transformer (ViT) has brought revolutionary changes to computer vision, achieving significant empirical success. However, the theoretical capabilities of transformers, particularly their generalization when trained to overfit the training data, are still not fully understood. To address this gap, this work studies transformers in vision through the lens of benign overfitting. To this end, we analyze the gradient-descent optimization of a Transformer consisting of a self-attention layer with softmax followed by a fully connected layer, under a specific data distribution model. By developing techniques that handle the challenges posed by the softmax operation and the interdependence of multiple weight matrices in transformer optimization, we characterize the training dynamics and the resulting generalization behavior after training. Our results establish a sharp condition, based on the signal-to-noise ratio of the data model, that separates the small-test-error regime from the large-test-error regime. The theoretical results are further verified by simulation experiments.
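A minimal sketch of the kind of setup described above: a single softmax self-attention layer followed by a fully connected layer, trained by gradient descent on synthetic data where one token of each sample carries a label-aligned signal and the remaining tokens are Gaussian noise. The dimensions, the `snr` value, the data model details, and the use of a finite-difference gradient (standing in for exact backpropagation) are all illustrative assumptions, not the paper's precise setting.

```python
import numpy as np

rng = np.random.default_rng(0)
d, P, n = 8, 4, 20   # feature dim, tokens per sample, sample count (assumed)
snr = 3.0            # illustrative signal-to-noise ratio

# Data model sketch: first token is y * mu (signal), the rest are noise.
mu = np.zeros(d)
mu[0] = snr
y = rng.choice([-1.0, 1.0], size=n)
X = rng.standard_normal((n, P, d))
X[:, 0, :] = y[:, None] * mu

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def forward(params, X):
    # Merged query-key matrix W and fully connected weights v.
    W, v = params[:d * d].reshape(d, d), params[d * d:]
    scores = X @ W @ X.transpose(0, 2, 1)   # (n, P, P) attention scores
    attn_out = softmax(scores) @ X          # softmax attention output
    return attn_out.mean(axis=1) @ v        # FC layer on pooled tokens

def loss(params):
    margins = y * forward(params, X)
    return np.log1p(np.exp(-margins)).mean()  # logistic loss

def num_grad(f, p, eps=1e-5):
    # Finite-difference gradient; a stand-in for exact backpropagation.
    g = np.zeros_like(p)
    for i in range(p.size):
        e = np.zeros_like(p)
        e[i] = eps
        g[i] = (f(p + e) - f(p - e)) / (2 * eps)
    return g

params = 0.01 * rng.standard_normal(d * d + d)
losses = [loss(params)]
for _ in range(200):            # plain gradient descent
    params -= 0.5 * num_grad(loss, params)
    losses.append(loss(params))
```

Varying `snr` in such a simulation is one way to probe the transition between the small- and large-test-error regimes that the abstract describes.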