Spectral imaging offers promising applications across diverse domains, including medicine and urban scene understanding, and is already established as a critical modality in remote sensing. However, variability in channel dimensionality and captured wavelengths among spectral cameras impedes the development of AI-driven methodologies, leading to camera-specific models with limited generalizability and inadequate cross-camera applicability. To address this bottleneck, we introduce CARL, a model for Camera-Agnostic Representation Learning across RGB, multispectral, and hyperspectral imaging modalities. To enable the conversion of a spectral image with any channel dimensionality into a camera-agnostic representation, we introduce a novel spectral encoder, featuring a self-attention-cross-attention mechanism, that distills salient spectral information into learned spectral representations. Spatio-spectral pre-training is achieved with a novel feature-based self-supervision strategy tailored to CARL. Large-scale experiments across the domains of medical imaging, autonomous driving, and satellite imaging demonstrate our model's unique robustness to spectral heterogeneity, outperforming competing methods on datasets with simulated and real-world cross-camera spectral variations. The scalability and versatility of the proposed approach position our model as a backbone for future spectral foundation models. Code and model weights are publicly available at https://github.com/IMSY-DKFZ/CARL.
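The core idea of the spectral encoder can be illustrated with a minimal sketch: per-channel tokens of arbitrary count first interact via self-attention, then a fixed set of learned queries distills them via cross-attention into a representation whose size is independent of the channel dimensionality. The sketch below is illustrative only, not the authors' implementation; all names, dimensions, and the single-head attention are assumptions for clarity.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Scaled dot-product attention (single head, no projections for brevity)
    d = q.shape[-1]
    return softmax(q @ k.T / np.sqrt(d)) @ v

def spectral_encode(channel_tokens, queries):
    # Self-attention: channels exchange information with each other
    h = attention(channel_tokens, channel_tokens, channel_tokens)
    # Cross-attention: fixed learned queries distill the C channel tokens
    # into a camera-agnostic representation of constant size
    return attention(queries, h, h)

rng = np.random.default_rng(0)
d, n_queries = 16, 4                 # hypothetical embedding dim and query count
queries = rng.normal(size=(n_queries, d))  # stands in for learned queries

# Channel counts typical of RGB, multispectral, and hyperspectral cameras
for C in (3, 31, 224):
    tokens = rng.normal(size=(C, d))       # one token per spectral channel
    out = spectral_encode(tokens, queries)
    print(C, out.shape)                    # output shape is (4, 16) for every C
```

Regardless of whether the input has 3, 31, or 224 channels, the output always has the same shape, which is what allows one model to serve heterogeneous spectral cameras.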