Spiking neural networks (SNNs), which efficiently encode temporal sequences, have shown great potential in extracting audio-visual joint feature representations. However, coupling SNNs (binary spike sequences) with Transformers (floating-point sequences) to jointly explore temporal-semantic information still faces challenges. In this paper, we introduce a novel Spiking Tucker Fusion Transformer (STFT) for audio-visual zero-shot learning (ZSL). The STFT leverages temporal and semantic information from different time steps to generate robust representations. A time-step factor (TSF) is introduced to dynamically synthesize the subsequent inference information. To guide the formation of input membrane potentials and reduce spike noise, we propose a global-local pooling (GLP) scheme that combines max and average pooling operations. Furthermore, the thresholds of the spiking neurons are dynamically adjusted based on semantic and temporal cues. Integrating the temporal and semantic information extracted by SNNs and Transformers is difficult because a straightforward bilinear model inflates the number of parameters. To address this, we introduce a temporal-semantic Tucker fusion module, which achieves multi-scale fusion of SNN and Transformer outputs while maintaining full second-order interactions. Our experimental results demonstrate the effectiveness of the proposed approach, achieving state-of-the-art performance on three benchmark datasets. The harmonic mean (HM) improvements on VGGSound, UCF101, and ActivityNet are around 15.4\%, 3.9\%, and 14.9\%, respectively.
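As a rough illustration of the GLP idea, the PyTorch sketch below blends max pooling, which keeps locally salient activations, with average pooling, which preserves global context, before the result serves as an input membrane potential. The learnable mixing weight `alpha` and the 1-D pooling layout are our assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class GlobalLocalPooling(nn.Module):
    """Hedged sketch of global-local pooling (GLP): mix max pooling
    (local salient cues) with average pooling (global context).
    The scalar mixing weight `alpha` is an illustrative assumption."""
    def __init__(self, kernel_size: int = 2):
        super().__init__()
        self.max_pool = nn.MaxPool1d(kernel_size)
        self.avg_pool = nn.AvgPool1d(kernel_size)
        self.alpha = nn.Parameter(torch.tensor(0.5))  # assumed learnable mix

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, length) feature sequence at one time step
        return self.alpha * self.max_pool(x) + (1.0 - self.alpha) * self.avg_pool(x)
```

The pooled output would then be fed to the spiking layer as its input membrane potential, so the mix suppresses isolated noisy spikes while keeping strong activations.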
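For the fusion step, a minimal single-scale sketch of Tucker-based bilinear fusion is given below; the projection sizes, the core tensor `core`, and the helper names `proj_s`/`proj_t` are hypothetical, and the paper's multi-scale temporal-semantic variant is not reproduced. The core tensor retains full second-order interactions between the two branches, while the factor matrices keep the parameter count far below a naive bilinear map of size `d_snn * d_tr * d_out`.

```python
import torch
import torch.nn as nn

class TuckerFusion(nn.Module):
    """Hedged sketch of Tucker-decomposition bilinear fusion of SNN and
    Transformer features; all dimensions are illustrative assumptions."""
    def __init__(self, d_snn: int, d_tr: int, d_core: int, d_out: int):
        super().__init__()
        self.proj_s = nn.Linear(d_snn, d_core)  # factor matrix, SNN branch
        self.proj_t = nn.Linear(d_tr, d_core)   # factor matrix, Transformer branch
        self.core = nn.Parameter(torch.randn(d_core, d_core, d_out) * 0.01)

    def forward(self, s: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # s: (batch, d_snn) SNN features; t: (batch, d_tr) Transformer features
        hs = self.proj_s(s)  # (batch, d_core)
        ht = self.proj_t(t)  # (batch, d_core)
        # Full second-order interaction through the core tensor:
        # out[b, o] = sum_{i,j} hs[b, i] * ht[b, j] * core[i, j, o]
        return torch.einsum("bi,bj,ijo->bo", hs, ht, self.core)
```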