Previous works have shown that reducing parameter overhead and computations for transformer-based single image super-resolution (SISR) models (e.g., SwinIR) usually leads to a reduction in performance. In this paper, we present GRFormer, an efficient and lightweight method, which not only reduces the parameter overhead and computations, but also greatly improves performance. The core of GRFormer is Grouped Residual Self-Attention (GRSA), which is specifically oriented towards two fundamental components. Firstly, it introduces a novel grouped residual layer (GRL) to replace the Query, Key, Value (QKV) linear layer in self-attention, aimed at efficiently reducing parameter overhead, computations, and performance loss at the same time. Secondly, it integrates a compact Exponential-Space Relative Position Bias (ES-RPB) as a substitute for the original relative position bias to improve the ability to represent position information while further minimizing the parameter count. Extensive experimental results demonstrate that GRFormer outperforms state-of-the-art transformer-based methods for $\times$2, $\times$3 and $\times$4 SISR tasks, notably outperforming SOTA by a maximum PSNR of 0.23dB when trained on the DIV2K dataset, while reducing the number of parameters and MACs in the self-attention module alone by about \textbf{60\%} and \textbf{49\%}, respectively. We hope that our simple and effective method, which can be easily applied to SR models based on window-division self-attention, can serve as a useful tool for further research in image super-resolution. The code is available at \url{https://github.com/sisrformer/GRFormer}.
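To make the parameter-saving argument concrete, the following is a minimal NumPy sketch of one plausible reading of the grouped residual layer: channels are split into $g$ groups, each projected by its own small matrix with an identity residual added. The function name, the exact residual formulation, and the dimensions are illustrative assumptions, not the paper's actual implementation; a full linear layer needs $d^2$ weights per projection, while the grouped variant needs only $d^2/g$.

```python
import numpy as np

def grouped_residual_linear(x, weights):
    """Hypothetical sketch of a grouped residual projection.

    x: (n_tokens, d) input features.
    weights: list of g per-group matrices, each (d//g, d//g).
    Each channel group gets its own small projection plus an
    identity residual (assumed form; the paper's GRL may differ).
    """
    g = len(weights)
    chunks = np.split(x, g, axis=-1)            # split channels into g groups
    outs = [c @ w + c for c, w in zip(chunks, weights)]  # project + residual
    return np.concatenate(outs, axis=-1)

d, g = 8, 4                                      # toy embedding dim and group count
rng = np.random.default_rng(0)
x = rng.standard_normal((5, d))
ws = [rng.standard_normal((d // g, d // g)) for _ in range(g)]
y = grouped_residual_linear(x, ws)

full_params = d * d                              # one dense d x d projection
grouped_params = g * (d // g) ** 2               # g blocks of (d/g) x (d/g) = d*d/g
```

With $g=4$ the grouped projection uses a quarter of the dense layer's weights, which is the kind of saving that, applied to the Q, K and V projections, underlies the roughly 60% parameter reduction reported for the self-attention module.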