Look-Up Table (LUT) based methods have emerged as a promising direction for efficient image restoration. Recent LUT-based methods improve performance by expanding the receptive field, but this inevitably introduces extra computational and storage overhead, which hinders deployment on edge devices. To address this issue, we propose ShiftLUT, a novel framework that attains the largest receptive field among LUT-based methods while maintaining high efficiency. Our key insight lies in three complementary components. First, a Learnable Spatial Shift (LSS) module is introduced to expand the receptive field by applying learnable, channel-wise spatial offsets to feature maps. Second, we propose an asymmetric dual-branch architecture that allocates more computation to the information-dense branch, substantially reducing inference latency without compromising restoration quality. Finally, we incorporate a feature-level LUT compression strategy, Error-bounded Adaptive Sampling (EAS), to minimize storage overhead. Compared with the previous state-of-the-art method TinyLUT, ShiftLUT achieves a 3.8$\times$ larger receptive field and improves average PSNR by over 0.21 dB across multiple standard benchmarks, while maintaining a small storage footprint and low inference time.
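To make the channel-wise spatial shift idea concrete, here is a minimal NumPy sketch. It is not the paper's implementation: the actual LSS offsets are learnable and trained end-to-end (which typically requires real-valued offsets with interpolation), whereas this sketch assumes fixed integer per-channel offsets and uses `np.roll` purely for illustration.

```python
import numpy as np

def channelwise_shift(x, offsets):
    """Shift each channel of x (shape: C, H, W) by its own (dy, dx) offset.

    Illustrative only: the paper's LSS module learns these offsets during
    training; here they are fixed integers and applied with a circular
    roll so the sketch stays self-contained.
    """
    out = np.empty_like(x)
    for c, (dy, dx) in enumerate(offsets):
        # Roll rows by dy and columns by dx for this channel.
        out[c] = np.roll(x[c], shift=(dy, dx), axis=(0, 1))
    return out

# Two 3x3 channels: channel 0 shifts one pixel horizontally,
# channel 1 shifts one pixel vertically. Each output location now
# aggregates information from a different neighbor, so stacking such
# shifts enlarges the effective receptive field without extra MACs.
x = np.arange(2 * 3 * 3, dtype=np.float32).reshape(2, 3, 3)
y = channelwise_shift(x, offsets=[(0, 1), (1, 0)])
```

Because a shift is a pure memory re-indexing, it adds no multiply-accumulate cost, which is consistent with the abstract's claim of expanding the receptive field while staying edge-friendly.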