Large Language Models (LLMs) have recently garnered significant attention, primarily for their capabilities in text-based interactions. However, natural human interaction often relies on speech, necessitating a shift towards voice-based models. A straightforward approach to achieve this involves a pipeline of ``Automatic Speech Recognition (ASR) + LLM + Text-to-Speech (TTS)'', where input speech is transcribed to text, processed by an LLM, and then converted back to speech. Despite its simplicity, this method suffers from inherent limitations, such as information loss during modality conversion and error accumulation across the three stages. To address these issues, Speech Language Models (SpeechLMs) -- end-to-end models that generate speech without converting from text -- have emerged as a promising alternative. This survey paper provides the first comprehensive overview of recent methodologies for constructing SpeechLMs, detailing the key components of their architecture and the various training recipes integral to their development. Additionally, we systematically survey the various capabilities of SpeechLMs, categorize the evaluation metrics for SpeechLMs, and discuss the challenges and future research directions in this rapidly evolving field.
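The cascaded baseline that the abstract contrasts with SpeechLMs can be sketched as follows. This is a minimal illustration of the three-stage control flow only; the stage functions are hypothetical stubs (not from the survey), standing in for real ASR, LLM, and TTS models.

```python
# Sketch of the cascaded "ASR + LLM + TTS" pipeline. Each stage is a
# hypothetical stub so the chaining is runnable; real systems would
# invoke actual models at each step.

def asr(audio: bytes) -> str:
    # Hypothetical ASR stage: speech -> text (stub).
    return "what is the weather today"

def llm(prompt: str) -> str:
    # Hypothetical LLM stage: text -> text (stub).
    return f"response to: {prompt}"

def tts(text: str) -> bytes:
    # Hypothetical TTS stage: text -> speech (stub).
    return text.encode("utf-8")

def cascaded_pipeline(audio_in: bytes) -> bytes:
    # Each hop is a modality conversion: paralinguistic cues (prosody,
    # emotion) are lost at the ASR step, and any transcription error
    # propagates through the LLM and TTS stages -- the accumulation
    # problem that end-to-end SpeechLMs are designed to avoid.
    text_in = asr(audio_in)    # speech -> text
    text_out = llm(text_in)    # text -> text
    return tts(text_out)       # text -> speech
```

In contrast, a SpeechLM would consume and emit speech representations directly, with no intermediate text bottleneck between the stages.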