Transformers have established themselves as the leading neural network model in natural language processing and are increasingly foundational across other domains. In vision, the MLP-Mixer model has demonstrated competitive performance, suggesting that attention mechanisms might not be indispensable. Inspired by this, recent research has explored replacing attention modules with other mechanisms, a direction generalized by the MetaFormer framework. However, the theoretical framework for these models remains underdeveloped. This paper proposes a novel perspective by integrating Krotov's hierarchical associative memory with MetaFormers, which makes it possible to represent the entire Transformer block, including the token-/channel-mixing modules, layer normalization, and skip connections, as a single Hopfield network. This approach yields a parallelized MLP-Mixer derived from a three-layer Hopfield network, which naturally incorporates symmetric token-/channel-mixing modules and layer normalization. Empirical studies reveal that the symmetric interaction matrices in this model hinder performance on image recognition tasks. Introducing a symmetry-breaking effect gradually shifts the performance of the symmetric parallelized MLP-Mixer toward that of the vanilla MLP-Mixer. This indicates that, during standard training, the weight matrices of the vanilla MLP-Mixer spontaneously acquire a symmetry-breaking configuration that enhances their effectiveness. These findings offer insights into the intrinsic properties of Transformers and MLP-Mixers and their theoretical underpinnings, providing a robust framework for future model design and optimization.
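To make the described architecture concrete, below is a minimal PyTorch sketch of one reading of the symmetric parallelized mixer block: the up- and down-projections in each mixing MLP are tied as W and W^T (our reading of "symmetric interaction matrices"), the token- and channel-mixing branches act in parallel on the same layer-normalized input around a single skip connection, and an optional untied perturbation breaks the symmetry. This is an illustrative sketch under those assumptions, not the authors' implementation; the class and argument names (SymmetricParallelMixerBlock, break_symmetry, etc.) are hypothetical.

```python
import torch
import torch.nn as nn


class SymmetricParallelMixerBlock(nn.Module):
    """One parallelized MLP-Mixer block with weight-tied (symmetric)
    token-/channel-mixing and an optional symmetry-breaking term.
    A sketch of one reading of the abstract, not the authors' code."""

    def __init__(self, num_tokens: int, dim: int,
                 token_hidden: int, channel_hidden: int,
                 break_symmetry: bool = False):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.act = nn.GELU()
        # One matrix per mixer, used as W on the way up and W^T on the
        # way down, so the effective interaction matrix is symmetric.
        self.W_tok = nn.Parameter(0.02 * torch.randn(token_hidden, num_tokens))
        self.W_ch = nn.Parameter(0.02 * torch.randn(channel_hidden, dim))
        # Zero-initialized untied perturbations: the model starts exactly
        # symmetric, and training can drive it away from symmetry.
        self.break_symmetry = break_symmetry
        if break_symmetry:
            self.A_tok = nn.Parameter(torch.zeros(token_hidden, num_tokens))
            self.A_ch = nn.Parameter(torch.zeros(channel_hidden, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_tokens, dim)
        h = self.norm(x)
        W_tok_up = self.W_tok + self.A_tok if self.break_symmetry else self.W_tok
        W_ch_up = self.W_ch + self.A_ch if self.break_symmetry else self.W_ch
        # Token mixing: mix along the token axis with W_up, return with W^T.
        tok = torch.einsum('ht,btd->bhd', W_tok_up, h)
        tok = torch.einsum('ht,bhd->btd', self.W_tok, self.act(tok))
        # Channel mixing: the same weight-tying along the channel axis.
        ch = self.act(h @ W_ch_up.t()) @ self.W_ch
        # Both branches are applied in parallel around one skip connection.
        return x + tok + ch


# Usage: a dummy forward pass (shapes are illustrative).
block = SymmetricParallelMixerBlock(num_tokens=196, dim=256,
                                    token_hidden=384, channel_hidden=1024,
                                    break_symmetry=True)
y = block(torch.randn(2, 196, 256))  # -> (2, 196, 256)
```

With break_symmetry=True the block reproduces the symmetric model at initialization, so any performance gap that opens up during training can be attributed to the learned departure from symmetry, mirroring the transition the abstract describes.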