Vision transformers (ViTs) have demonstrated superior accuracy on computer vision tasks compared to convolutional neural networks (CNNs). However, ViT models are often too computation-intensive to deploy efficiently on resource-limited edge devices. This work proposes Quasar-ViT, a hardware-oriented quantization-aware architecture search framework for ViTs, to design efficient ViT models for hardware implementation while preserving accuracy. First, Quasar-ViT trains a supernet using our row-wise flexible mixed-precision quantization scheme, mixed-precision weight entanglement, and supernet layer scaling techniques. Then, it applies an efficient hardware-oriented search algorithm, integrated with hardware latency and resource modeling, to determine a series of optimal subnets from the supernet under different inference latency targets. Finally, we propose a series of model-adaptive designs on the FPGA platform to support the architecture search and mitigate the gap between the theoretical computation reduction and the practical inference speedup. Our searched models achieve 101.5, 159.6, and 251.6 frames-per-second (FPS) inference speed on the AMD/Xilinx ZCU102 FPGA with 80.4%, 78.6%, and 74.9% top-1 accuracy, respectively, on the ImageNet dataset, consistently outperforming prior works.
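To make the idea of row-wise mixed-precision quantization concrete, the following is a minimal sketch, assuming a uniform symmetric quantizer and a fixed per-row bit-width assignment; the function names and quantizer details are illustrative and not the paper's exact scheme.

```python
# Hypothetical sketch of row-wise mixed-precision weight quantization:
# each row of a weight matrix gets its own bit-width, so rows of equal
# precision can be packed together in hardware. The uniform symmetric
# quantizer below is an assumed stand-in, not Quasar-ViT's exact method.
import numpy as np

def quantize_row(row: np.ndarray, bits: int) -> np.ndarray:
    """Uniform symmetric quantization of one row to `bits` bits (simulated)."""
    qmax = 2 ** (bits - 1) - 1                    # e.g. 7 for signed 4-bit
    scale = np.max(np.abs(row)) / qmax or 1.0     # avoid div-by-zero scale
    q = np.clip(np.round(row / scale), -qmax - 1, qmax)
    return q * scale                              # dequantized values

def quantize_rowwise(W: np.ndarray, row_bits: list[int]) -> np.ndarray:
    """Quantize each row of W at its assigned precision."""
    assert W.shape[0] == len(row_bits)
    return np.stack([quantize_row(r, b) for r, b in zip(W, row_bits)])

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8)).astype(np.float32)
Wq = quantize_rowwise(W, [8, 4, 4, 2])            # per-row bit-widths
```

Rows assigned more bits incur less quantization error, which is the trade-off the hardware-oriented search navigates against latency and resource budgets.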