The emergence of Large Audio-Language Models (LALMs) has advanced Speech Emotion Recognition (SER), but their size limits deployment in resource-constrained environments. While Knowledge Distillation (KD) is effective for LALM compression, existing methods leave distillation of the cross-modal projection module (Projector) underexplored and often struggle with alignment due to mismatched feature dimensions. We propose PL-Distill, a KD framework that combines Projector-Level Distillation (PDist), which aligns audio embeddings, with Logits-Level Distillation (LDist), which aligns output logits. PDist introduces Attention-weighted Centered Kernel Alignment, a novel method that highlights important time steps while accommodating the dimension mismatch between teacher and student. LDist minimizes the Kullback-Leibler divergence between teacher and student logits for both the audio and text modalities. On IEMOCAP, RAVDESS, and SAVEE, PL-Distill compresses an 8.4B-parameter teacher into a compact 1.1B-parameter student that consistently outperforms the teacher, state-of-the-art pretrained models, and other KD baselines across all metrics.
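To make the two losses concrete, the following is a minimal PyTorch sketch of how the abstract's components could look in code. The function names, the square-root attention weighting applied before the linear-CKA computation, and the temperature `tau` are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F


def attention_weighted_cka_loss(teacher_feats, student_feats, attn_weights, eps=1e-8):
    """PDist sketch: linear CKA between teacher/student audio embeddings of
    different widths, weighted per time step by attention scores.

    teacher_feats: (T, d_t), student_feats: (T, d_s), attn_weights: (T,).
    Because CKA compares T x T Gram matrices, d_t != d_s is not a problem.
    """
    w = attn_weights / (attn_weights.sum() + eps)   # normalize weights over time
    # Emphasize important time steps before forming Gram matrices
    # (assumed weighting scheme).
    Xt = teacher_feats * w.sqrt().unsqueeze(-1)
    Xs = student_feats * w.sqrt().unsqueeze(-1)
    # Center features, then build Gram matrices.
    Xt = Xt - Xt.mean(dim=0, keepdim=True)
    Xs = Xs - Xs.mean(dim=0, keepdim=True)
    Kt, Ks = Xt @ Xt.T, Xs @ Xs.T
    # Linear CKA = tr(Kt Ks) / (||Kt||_F ||Ks||_F); loss = 1 - CKA.
    cka = (Kt * Ks).sum() / (Kt.norm() * Ks.norm() + eps)
    return 1.0 - cka


def logits_kl_loss(teacher_logits, student_logits, tau=2.0):
    """LDist sketch: KL divergence between temperature-softened teacher and
    student logits (applied to both audio- and text-modality logits)."""
    t = F.log_softmax(teacher_logits / tau, dim=-1)
    s = F.log_softmax(student_logits / tau, dim=-1)
    # KL(teacher || student), scaled by tau^2 as in standard distillation.
    return F.kl_div(s, t, log_target=True, reduction="batchmean") * tau**2
```

In the full objective, one would presumably sum the CKA loss over projector outputs and the KL terms over the two modalities' logits; the relative weighting of PDist and LDist is left as a hyperparameter here.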