Transformers achieve state-of-the-art accuracy and robustness across many tasks, but an understanding of their inductive biases, and of how those biases differ from those of other neural network architectures, remains elusive. Various architectures, such as fully connected networks, have been found to exhibit a simplicity bias toward learning simple functions of the data; one version of this simplicity bias is a spectral bias toward learning simple functions in Fourier space. In this work, we identify the sensitivity of the model to random changes in the input as a notion of simplicity bias that provides a unified metric for explaining the simplicity and spectral biases of transformers across different data modalities. We show that transformers have lower sensitivity than alternative architectures, such as LSTMs, MLPs, and CNNs, on both vision and language tasks. We also show that the low-sensitivity bias correlates with improved robustness, and that it can be used as an efficient intervention to further improve the robustness of transformers.
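To make the sensitivity notion concrete, the sketch below estimates how much a model's output changes under random perturbations of its input. It is a minimal illustration, not the paper's exact procedure: the function name `estimate_sensitivity`, the PyTorch token-classifier interface, and the single-token replacement scheme are all assumptions made for this example.

```python
import torch

@torch.no_grad()
def estimate_sensitivity(model, input_ids, vocab_size, num_samples=32):
    """Average change in output logits when one random token per sequence
    is replaced by a uniformly random token (lower value = lower sensitivity).

    Assumes `model(input_ids)` returns logits of shape (batch, num_classes).
    """
    base_logits = model(input_ids)
    batch, seq_len = input_ids.shape
    diffs = []
    for _ in range(num_samples):
        perturbed = input_ids.clone()
        # Choose one random position per sequence and overwrite it with a random token.
        pos = torch.randint(0, seq_len, (batch,))
        new_tok = torch.randint(0, vocab_size, (batch,))
        perturbed[torch.arange(batch), pos] = new_tok
        # Record how far the output moves from the unperturbed prediction.
        diffs.append((model(perturbed) - base_logits).norm(dim=-1).mean())
    return torch.stack(diffs).mean()
```

Under this reading, comparing the returned value across architectures trained on the same task gives a rough, modality-agnostic proxy for the low-sensitivity bias discussed above.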