In recent years, Transformer-based models have made significant progress in image restoration by leveraging their inherent ability to capture complex contextual features. More recently, Mamba models have attracted considerable attention in computer vision due to their ability to handle long-range dependencies and their significant computational efficiency compared to Transformers. However, Mamba currently lags behind Transformers in contextual learning capability. To overcome the limitations of both models, we propose MatIR, a hybrid Mamba-Transformer image restoration model. Specifically, MatIR cyclically interleaves blocks of the Transformer layer and the Mamba layer to extract features, thereby fully exploiting the advantages of both architectures. In the Mamba module, we introduce the Image Restoration State Space (IRSS) module, which traverses the input along four scan paths to process long sequence data efficiently. In the Transformer module, we combine triangular-window-based local attention with channel-based global attention to effectively activate the attention mechanism over a wider range of image pixels. Extensive experimental results and ablation studies demonstrate the effectiveness of our approach.
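To make the four-scan-path idea concrete, the sketch below flattens a 2D feature map along four directions (row-major forward/backward and column-major forward/backward), as is common in multi-directional state-space scanning. This is a minimal illustration only: the function name `four_scan_paths` and the exact traversal orders are assumptions, not the paper's specified implementation.

```python
import numpy as np

def four_scan_paths(x):
    """Flatten an H x W feature map along four scan paths.

    Illustrative sketch of multi-directional scanning: the actual
    IRSS traversal order is defined by the model, not by this code.
    """
    row = x.reshape(-1)        # row-major, left-to-right, top-to-bottom
    row_rev = row[::-1]        # row-major, reversed
    col = x.T.reshape(-1)      # column-major, top-to-bottom, left-to-right
    col_rev = col[::-1]        # column-major, reversed
    return [row, row_rev, col, col_rev]

# Example: a 2x3 feature map with entries 0..5
paths = four_scan_paths(np.arange(6).reshape(2, 3))
```

Each of the four 1D sequences would then be fed through a state-space (Mamba) block, and the outputs merged, so that every pixel can aggregate context from all four directions.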