We introduce Speech Information Retrieval (SIR), a new long-context task for Speech Large Language Models (Speech LLMs), and present SPIRAL, a 1,012-sample benchmark testing models' ability to extract critical details from approximately 90-second spoken inputs. While current Speech LLMs excel at short-form tasks, they struggle with the computational and representational demands of longer audio sequences. To address this limitation, we propose SpeechPrune, a training-free token pruning strategy that uses speech-text similarity and approximated attention scores to efficiently discard irrelevant tokens. On SPIRAL, at a 20% pruning rate, SpeechPrune improves accuracy by 29% over the original model and by up to 47% over random pruning. SpeechPrune maintains network performance even at a pruning rate of 80%. These results highlight the potential of token-level pruning for efficient and scalable long-form speech understanding.
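The core idea of the scoring step can be illustrated with a minimal sketch. The function below, all of whose names and the specific scoring formula are our own illustrative assumptions (the paper's exact method differs), ranks speech tokens by a mix of cosine similarity to the text prompt and an approximated attention weight, then keeps the top fraction in temporal order:

```python
import numpy as np

def prune_speech_tokens(speech_emb, text_emb, prune_ratio=0.2, alpha=0.5):
    """Illustrative sketch of similarity-plus-attention token pruning.

    speech_emb: (num_speech_tokens, dim) speech token embeddings
    text_emb:   (num_text_tokens, dim) text prompt embeddings
    Returns indices of kept speech tokens, in original temporal order.
    NOTE: hypothetical scoring; not the paper's exact formulation.
    """
    # Cosine similarity of each speech token to the mean text embedding.
    text_query = text_emb.mean(axis=0)
    sim = speech_emb @ text_query / (
        np.linalg.norm(speech_emb, axis=1) * np.linalg.norm(text_query) + 1e-8
    )
    # Approximated attention: max softmax weight over text tokens.
    logits = speech_emb @ text_emb.T / np.sqrt(speech_emb.shape[1])
    attn = np.exp(logits - logits.max(axis=1, keepdims=True))
    attn = (attn / attn.sum(axis=1, keepdims=True)).max(axis=1)
    # Combine the two scores and drop the lowest-scoring fraction.
    score = alpha * sim + (1 - alpha) * attn
    keep = int(round(len(score) * (1 - prune_ratio)))
    return np.sort(np.argsort(score)[::-1][:keep])  # preserve temporal order

rng = np.random.default_rng(0)
speech = rng.normal(size=(100, 16))   # 100 speech tokens
text = rng.normal(size=(8, 16))       # 8 text prompt tokens
kept = prune_speech_tokens(speech, text, prune_ratio=0.2)
print(len(kept))  # 80 tokens kept at a 20% pruning rate
```

Because the scoring uses only embedding-space operations and no gradient updates, a procedure of this shape is training-free, which is the property the abstract emphasizes.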