The development of learning-based hyperspectral image (HSI) compression models has recently attracted significant interest. Existing models predominantly utilize convolutional filters, which capture only local dependencies. Furthermore, they often incur high training costs and exhibit substantial computational complexity. To address these limitations, in this paper we propose the Hyperspectral Compression Transformer (HyCoT), a transformer-based autoencoder for pixelwise HSI compression. Additionally, we apply a simple yet effective training set reduction approach to accelerate the training process. Experimental results on the HySpecNet-11k dataset demonstrate that HyCoT surpasses the state of the art across various compression ratios by over 1 dB of PSNR with significantly reduced computational requirements. Our code and pre-trained weights are publicly available at https://git.tu-berlin.de/rsim/hycot .