Although Multimodal Large Language Models (MLLMs) have demonstrated remarkable capabilities in vision, language, and video understanding tasks, scaling them to long-form speech remains a critical bottleneck due to the explosive growth of input tokens. Existing speech-language models typically project high-frame-rate acoustic features directly into the LLM input space, rendering long-context processing computationally prohibitive as audio duration increases. In this paper, we present FastSLM, a token-efficient architecture designed to overcome this scalability limit through extreme temporal compression. At its core is the Hierarchical Frame Querying Transformer (HFQ-Former), which progressively distills local acoustic details into compact, semantically rich representations across multiple temporal scales. This hierarchical abstraction reduces the speech representation rate to just 1.67 tokens per second, a 93% reduction relative to standard frame-level adapters, while preserving the context required for complex reasoning. Experimental results demonstrate that FastSLM achieves competitive performance with state-of-the-art models on long-form benchmarks despite operating with significantly lower FLOPs and parameter counts. Our findings establish that extreme token compression is a viable pathway to real-time, long-context speech understanding for LLMs, even under strict computational constraints. The source code and model checkpoints are available at https://anonymous.4open.science/r/FastSLM-8BD3.
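The token savings claimed above can be sanity-checked with simple arithmetic. The sketch below assumes a hypothetical frame-level baseline of 25 tokens per second (a rate consistent with the stated ~93% reduction; actual adapter rates vary by encoder and are not specified here) and compares token budgets for long-form audio.

```python
# Sketch: token budget comparison for long-form audio.
# BASELINE_RATE is an assumption chosen to be consistent with the
# abstract's ~93% reduction claim; it is not taken from the paper.
BASELINE_RATE = 25.0   # tokens/s, hypothetical frame-level adapter
FASTSLM_RATE = 1.67    # tokens/s, as reported for FastSLM

def token_counts(duration_s: float) -> tuple[int, int, float]:
    """Return (baseline tokens, compressed tokens, fractional reduction)."""
    baseline = round(duration_s * BASELINE_RATE)
    compressed = round(duration_s * FASTSLM_RATE)
    reduction = 1.0 - compressed / baseline
    return baseline, compressed, reduction

# One hour of audio: ~90k tokens at the assumed baseline vs ~6k compressed.
b, c, r = token_counts(3600)
print(f"baseline={b}, fastslm={c}, reduction={r:.1%}")
```

Under this assumed baseline, an hour of audio drops from roughly 90,000 input tokens to about 6,000, which is what keeps long-context attention costs tractable as duration grows.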