Speech-language models (SLMs) offer a promising path toward unifying speech and text understanding and generation. However, effective cross-modal alignment and high-quality speech generation remain challenging. In this work, we systematically investigate the role of speech tokenizer design in LLM-centric SLMs, augmented with speech heads and speaker modeling. Comparing coupled, semi-decoupled, and fully decoupled speech tokenizers under a fair SLM framework, we find that decoupled tokenization significantly improves both alignment and synthesis quality. To address the information-density mismatch between speech and text, we introduce multi-token prediction (MTP) into SLMs, enabling each hidden state to decode multiple speech tokens. This yields up to 12$\times$ faster decoding and a substantial drop in word error rate (from 6.07 to 3.01). Furthermore, we propose a speaker-aware generation paradigm and introduce RoleTriviaQA, a large-scale role-playing knowledge QA benchmark with diverse speaker identities. Experiments demonstrate that our methods improve both knowledge understanding and speaker consistency.
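To make the multi-token prediction idea concrete, the following is a minimal sketch of decoding $K$ speech tokens from a single hidden state via $K$ projection heads. The head count, dimensions, random weights, and greedy argmax here are illustrative assumptions, not the paper's actual architecture; the point is only that one forward pass emits $K$ tokens, cutting autoregressive steps by roughly a factor of $K$.

```python
import numpy as np

rng = np.random.default_rng(0)

HIDDEN, VOCAB, K = 16, 32, 4  # hypothetical sizes; K speech tokens per hidden state

# One linear projection head per predicted position (illustrative MTP heads).
heads = [rng.standard_normal((HIDDEN, VOCAB)) for _ in range(K)]

def mtp_decode_step(hidden_state):
    """Greedily decode K speech tokens from one hidden state, one per head."""
    return [int(np.argmax(hidden_state @ W)) for W in heads]

h = rng.standard_normal(HIDDEN)   # stand-in for one LLM hidden state
tokens = mtp_decode_step(h)
print(len(tokens))  # → 4: K tokens from a single forward pass instead of 1
```

With $K$ tokens emitted per step, a speech sequence of length $T$ needs about $T/K$ forward passes, which is the source of the decoding speedup the abstract reports.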