Vision Transformers (ViTs), which leverage the self-attention mechanism, have shown superior performance over convolutional neural networks (CNNs) on many classical vision tasks and have gained increasing popularity recently. Existing ViT works mainly optimize performance and accuracy, while the reliability issues of ViTs induced by soft errors in large-scale VLSI designs have generally been overlooked. In this work, we study the reliability of ViTs and, for the first time, investigate their vulnerability at different architectural granularities, ranging from models and layers to modules and patches. The investigation reveals that ViTs with the self-attention mechanism are generally more resilient in linear computing, including general matrix-matrix multiplication (GEMM) and fully connected (FC) layers, and show a relatively even vulnerability distribution across patches. Compared to typical CNNs, ViTs involve more fragile non-linear computing such as softmax and GELU. Based on these observations, we propose a lightweight block-wise algorithm-based fault tolerance (LB-ABFT) approach to protect the linear computing implemented with GEMMs of distinct sizes, and apply a range-based protection scheme to mitigate soft errors in non-linear computing. According to our experiments, the proposed fault-tolerant approaches significantly enhance ViT accuracy with minor computing overhead in the presence of various soft errors.
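To make the two protection ideas concrete, below is a minimal pure-Python sketch of the classic checksum scheme underlying ABFT for GEMM, plus a simple range-based clamp for non-linear outputs. This is an illustration of the general technique, not the paper's LB-ABFT implementation (which applies checksums block-wise to GEMM tiles); all function names here are illustrative.

```python
def matmul(A, B):
    # Plain GEMM over lists of lists (rows of A times columns of B).
    k, m = len(B), len(B[0])
    return [[sum(row[t] * B[t][j] for t in range(k)) for j in range(m)]
            for row in A]

def encode_and_multiply(A, B):
    # ABFT encoding: append a column-checksum row to A and a row-checksum
    # column to B. The augmented product then carries its own checksums.
    Ac = A + [[sum(col) for col in zip(*A)]]
    Br = [row + [sum(row)] for row in B]
    return matmul(Ac, Br)

def verify(C_aug, tol=1e-9):
    # Recompute row/column sums of the result block and compare them with
    # the carried checksums; a mismatch flags a soft error in the GEMM.
    C = [row[:-1] for row in C_aug[:-1]]
    col_ok = all(abs(C_aug[-1][j] - sum(row[j] for row in C)) < tol
                 for j in range(len(C[0])))
    row_ok = all(abs(C_aug[i][-1] - sum(C[i])) < tol
                 for i in range(len(C)))
    return C, col_ok and row_ok

def range_protect(x, lo, hi):
    # Range-based protection for non-linear computing (e.g. softmax, GELU):
    # clamp each output to a profiled value range so an error-corrupted
    # outlier cannot propagate; the bounds lo/hi are assumed to be profiled
    # offline on fault-free runs.
    return [min(max(v, lo), hi) for v in x]
```

A fault is detected by running `verify` on the augmented product; for example, corrupting one entry of `encode_and_multiply(A, B)` before verification makes both the affected row and column checksums mismatch, which is what also allows locating (and correcting) a single erroneous element.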