Current deep learning models for Multispectral and Hyperspectral Image Fusion (MS/HS fusion) are typically designed for fixed spectral bands and spatial scales, which limits their transferability across diverse sensors. To address this, we propose SSA, a universal MS/HS fusion framework that is agnostic to both spectral bands and fusion scales. Specifically, we introduce the Matryoshka Kernel (MK), a novel operator that enables a single model to adapt to an arbitrary number of spectral channels. Meanwhile, we build SSA upon an Implicit Neural Representation (INR) backbone that models the HS signal as a continuous function, enabling reconstruction at arbitrary spatial resolutions. Together, these two forms of agnosticism yield a single MS/HS fusion model. Extensive experiments demonstrate that this single model achieves state-of-the-art performance while generalizing well to unseen sensors and scales, paving the way toward future HS foundation models.
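To make the channel-agnostic idea concrete, the following is a minimal NumPy sketch of one plausible realization of a "matryoshka" convolution kernel. It assumes (and the paper does not specify this mechanism) that a single weight tensor is allocated for a maximum channel count `C_MAX`, and that the leading sub-block is sliced off to process inputs from sensors with fewer bands; all names and shapes here are illustrative, not the authors' implementation.

```python
import numpy as np

# Hypothetical sketch of a channel-agnostic ("matryoshka") conv kernel.
# Assumption: one shared weight tensor covers up to C_MAX input channels,
# and a nested sub-kernel is sliced out to match each sensor's band count.

C_MAX = 64   # maximum spectral channels the shared kernel supports
K = 3        # spatial kernel size
OUT = 16     # output feature channels

rng = np.random.default_rng(0)
full_kernel = rng.standard_normal((OUT, C_MAX, K, K)) * 0.01

def matryoshka_conv(image, kernel=full_kernel):
    """'Same'-padded convolution using only the first C slices of the shared kernel.

    image: array of shape (C, H, W) with C <= C_MAX spectral channels.
    Returns an array of shape (OUT, H, W).
    """
    C, H, W = image.shape
    w = kernel[:, :C]  # nested sub-kernel matching this sensor's band count
    pad = K // 2
    x = np.pad(image, ((0, 0), (pad, pad), (pad, pad)))
    out = np.zeros((OUT, H, W))
    for i in range(H):
        for j in range(W):
            patch = x[:, i:i + K, j:j + K]       # (C, K, K) receptive field
            out[:, i, j] = np.tensordot(w, patch, axes=3)
    return out

# The same shared weights serve a 4-band MS image and a 31-band HS image.
ms = rng.standard_normal((4, 8, 8))
hs = rng.standard_normal((31, 8, 8))
ms_feat = matryoshka_conv(ms)
hs_feat = matryoshka_conv(hs)
```

In this toy version the sub-kernel is a plain slice of the full tensor; the actual MK operator may involve a more sophisticated nesting or re-weighting scheme, which the abstract does not detail.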