Hyperspectral image (HSI) classification presents unique challenges due to its high spectral dimensionality and limited labeled data. Traditional deep learning models often suffer from overfitting and high computational costs. Self-distillation (SD), a variant of knowledge distillation in which a network learns from its own predictions, has recently emerged as a promising strategy for improving model performance without an external teacher network. In this work, we explore the application of SD to HSI classification by treating the network's earlier outputs as soft targets, thereby enforcing consistency between intermediate and final predictions. This improves intra-class compactness and inter-class separability in the learned feature space. Our approach is validated on two benchmark HSI datasets and yields significant improvements in classification accuracy and robustness, highlighting the effectiveness of SD for spectral-spatial learning. Code is available at https://github.com/Prachet-Dev-Singh/SDHSI.
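As a rough illustration of the consistency objective described above, the PyTorch sketch below combines a standard cross-entropy loss on the final prediction with a KL-divergence term that pulls it toward softened intermediate outputs. The function name, the hyperparameters (`temperature`, `alpha`), and the direction of the KL term are illustrative assumptions based on common self-distillation formulations, not the released implementation; see the linked repository for the authors' code.

```python
import torch
import torch.nn.functional as F

def self_distillation_loss(intermediate_logits, final_logits, labels,
                           temperature=3.0, alpha=0.3):
    """Cross-entropy on the final head plus a soft consistency term that
    encourages agreement between intermediate and final predictions,
    with the earlier outputs treated as soft targets."""
    # Supervised loss on the final prediction.
    hard_loss = F.cross_entropy(final_logits, labels)

    # Softened final prediction, compared against each intermediate head.
    final_log_probs = F.log_softmax(final_logits / temperature, dim=1)

    soft_loss = 0.0
    for logits in intermediate_logits:
        # Detach so no gradient flows through the soft-target side.
        soft_targets = F.softmax(logits.detach() / temperature, dim=1)
        # KL divergence, rescaled by T^2 as in standard distillation.
        soft_loss = soft_loss + F.kl_div(
            final_log_probs, soft_targets, reduction="batchmean"
        ) * temperature ** 2
    soft_loss = soft_loss / len(intermediate_logits)

    return (1.0 - alpha) * hard_loss + alpha * soft_loss

# Toy usage: a batch of 8 pixel patches, 16 land-cover classes,
# two auxiliary classifiers attached at intermediate depths.
inter = [torch.randn(8, 16), torch.randn(8, 16)]
final = torch.randn(8, 16, requires_grad=True)
labels = torch.randint(0, 16, (8,))
loss = self_distillation_loss(inter, final, labels)
```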