This paper presents the Spectral-Interpretable and -Enhanced Transformer (SIEFormer), a novel approach that leverages spectral analysis to reinterpret the attention mechanism of the Vision Transformer (ViT) and to enhance feature adaptability, with particular emphasis on the challenging Generalized Category Discovery (GCD) task. SIEFormer comprises two main branches, corresponding to implicit and explicit spectral perspectives of the ViT, which are optimized jointly. The implicit branch employs different types of graph Laplacians to model local structural correlations among tokens, together with a novel Band-adaptive Filter (BaF) layer that can flexibly perform both band-pass and band-reject filtering. The explicit branch introduces a Maneuverable Filtering Layer (MFL) that learns global dependencies among tokens by applying the Fourier transform to the input ``value'' features, modulating the transformed signal with a set of learnable parameters in the frequency domain, and then applying the inverse Fourier transform to obtain the enhanced features. Extensive experiments on multiple image recognition datasets demonstrate state-of-the-art performance, and ablation studies and visualizations further confirm the effectiveness of our approach.
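As a rough illustration of the two filtering ideas above (a minimal sketch, not the paper's implementation: all function names, tensor shapes, the 1-D token axis, and the eigendecomposition-based realization of the graph filter are assumptions), the explicit branch's frequency-domain modulation and the implicit branch's band-selective graph filtering can be written in NumPy as:

```python
import numpy as np

def maneuverable_filter(values, freq_weights):
    """Explicit-branch sketch: FFT over the token axis, elementwise
    modulation by learnable frequency-domain weights, inverse FFT back."""
    spectrum = np.fft.rfft(values, axis=0)           # token domain -> frequency domain
    modulated = spectrum * freq_weights              # learnable per-frequency modulation
    return np.fft.irfft(modulated, n=values.shape[0], axis=0)  # back to token domain

def band_filter(values, laplacian, low, high, reject=False):
    """Implicit-branch sketch: project token features onto the graph
    Laplacian's eigenbasis, then keep (band-pass) or drop (band-reject)
    the components whose eigenvalues fall inside [low, high]."""
    evals, evecs = np.linalg.eigh(laplacian)         # graph spectrum (symmetric Laplacian)
    keep = (evals >= low) & (evals <= high)
    if reject:
        keep = ~keep
    coeffs = evecs.T @ values                        # graph Fourier transform of features
    return evecs @ (coeffs * keep[:, None])          # filter and reconstruct

# Toy usage: 8 tokens with 4-dim features on a ring graph.
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 4))
adj = np.roll(np.eye(8), 1, axis=1) + np.roll(np.eye(8), -1, axis=1)
lap = np.diag(adj.sum(axis=1)) - adj                 # combinatorial graph Laplacian
all_pass = np.ones((8 // 2 + 1, 4))                  # identity filter recovers the input
y = maneuverable_filter(x, all_pass)
# Complementary band-pass and band-reject responses sum back to the input.
z = band_filter(x, lap, 0.0, 1.0) + band_filter(x, lap, 0.0, 1.0, reject=True)
```

In the actual model, the frequency weights and band boundaries would be learned end-to-end during training, and the Fourier transform would plausibly act over a 2-D spatial token grid rather than a 1-D sequence.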