Hyperspectral image (HSI) restoration is a fundamental challenge in computational imaging and computer vision, encompassing ill-posed inverse problems such as inpainting and super-resolution. Although deep learning methods have transformed the field through data-driven learning, their effectiveness hinges on access to meticulously curated ground-truth datasets, which fundamentally restricts their applicability in real-world scenarios where such data is unavailable. This paper presents SHARE (Single Hyperspectral Image Restoration with Equivariance), a fully unsupervised framework that unifies geometric equivariance principles with low-rank spectral modelling to eliminate the need for ground truth. SHARE's core idea is to exploit the intrinsic invariance of hyperspectral structures under differentiable geometric transformations (e.g. rotations and scaling) to derive self-supervision signals through equivariance consistency constraints. Our novel Dynamic Adaptive Spectral Attention (DASA) module further enhances this paradigm shift by explicitly encoding the global low-rank property of HSIs and adaptively refining local spectral-spatial correlations through learnable attention mechanisms. Extensive experiments on HSI inpainting and super-resolution tasks demonstrate the effectiveness of SHARE: our method outperforms many state-of-the-art unsupervised approaches and achieves performance comparable to that of supervised methods. We hope that our approach will shed new light on HSI restoration and broader scientific imaging scenarios. The code will be released at https://github.com/xuwayyy/SHARE.
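The equivariance consistency constraint can be illustrated with a minimal sketch: a restoration map `restore` is equivariant under a transform `T` when restoring a transformed input agrees with transforming the restored output, i.e. `restore(T(y)) ≈ T(restore(y))`. The names `equivariance_loss`, `rotate90`, and the toy restorers below are illustrative assumptions, not the paper's actual implementation; a real training loop would apply such a loss to a neural restoration network.

```python
import numpy as np

def rotate90(x):
    # Rotate the spatial axes (H, W) of an (H, W, bands) hyperspectral cube.
    return np.rot90(x, k=1, axes=(0, 1))

def equivariance_loss(restore, y, transform=rotate90):
    # Penalize disagreement between "transform then restore"
    # and "restore then transform" (hypothetical consistency loss).
    return float(np.mean((restore(transform(y)) - transform(restore(y))) ** 2))

# Toy restorer: a band-wise scaling commutes with rotation, so its loss is zero.
equivariant_restore = lambda x: 0.5 * x

# A restorer that depends on absolute spatial position is NOT equivariant.
rows = np.arange(8, dtype=float).reshape(8, 1, 1)
biased_restore = lambda x: x + rows

y = np.random.default_rng(0).random((8, 8, 4))  # small HSI cube (H, W, bands)
print(equivariance_loss(equivariant_restore, y))  # 0.0
print(equivariance_loss(biased_restore, y) > 0)   # True
```

In a fully unsupervised setting, this self-consistency term supplies a training signal without any ground-truth images: only the degraded observation and the chosen family of differentiable transforms are needed.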