Recent advances in GPT-4o-like multi-modality models have demonstrated remarkable progress in direct speech-to-speech conversation, offering a real-time speech interaction experience and strong speech understanding ability. However, current research focuses on discrete speech tokens to align with discrete text tokens for language modelling, which depends on an audio codec with residual connections or independent group tokens. Such a codec typically requires training on large-scale, diverse datasets so that the discrete speech codes represent varied domains, noise conditions, and styles well enough for faithful reconstruction, as well as a carefully designed quantizer and encoder-decoder architecture suited to discrete-token language modelling. This paper introduces Flow-Omni, a GPT-4o-like model based on continuous speech tokens, capable of real-time speech interaction with low streaming latency. Specifically, first, instead of using cross-entropy loss alone, we combine a flow matching loss with a pretrained autoregressive LLM and a small MLP network to predict the probability distribution of the continuous-valued speech tokens from the speech prompt. Second, we incorporate the continuous speech tokens into Flow-Omni multi-modality training, thereby achieving robust speech-to-speech performance with discrete text tokens and continuous speech tokens together. Experiments demonstrate that, compared to discrete text-and-speech multi-modality training and its variants, continuous speech tokens mitigate robustness issues by avoiding the representation loss inherent in discrete speech codes for LLM modelling.
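The first contribution above, pairing a flow matching loss with the autoregressive LLM's hidden state and a small MLP head to model continuous speech tokens, can be illustrated with a minimal sketch. Everything below (layer sizes, the linear interpolation path, and all names such as `FlowMatchingHead`) is an assumption for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class FlowMatchingHead(nn.Module):
    """Hypothetical small MLP head that regresses a flow-matching velocity for one
    continuous speech token, conditioned on the LLM hidden state at that step."""

    def __init__(self, hidden_dim: int, token_dim: int, mlp_dim: int = 1024):
        super().__init__()
        self.time_embed = nn.Linear(1, mlp_dim)  # embed flow time t
        self.net = nn.Sequential(
            nn.Linear(hidden_dim + token_dim + mlp_dim, mlp_dim),
            nn.SiLU(),
            nn.Linear(mlp_dim, mlp_dim),
            nn.SiLU(),
            nn.Linear(mlp_dim, token_dim),
        )

    def forward(self, llm_hidden, x_t, t):
        # llm_hidden: (B, hidden_dim) AR-LLM state for the current speech frame
        # x_t:        (B, token_dim) noised continuous speech token at time t
        # t:          (B, 1) flow time in [0, 1]
        cond = torch.cat([llm_hidden, x_t, self.time_embed(t)], dim=-1)
        return self.net(cond)  # predicted velocity field


def flow_matching_loss(head, llm_hidden, x1):
    """Conditional flow matching with a linear path x_t = (1 - t) * x0 + t * x1
    and target velocity v = x1 - x0 (an assumed, standard choice of path)."""
    x0 = torch.randn_like(x1)                        # Gaussian prior sample
    t = torch.rand(x1.size(0), 1, device=x1.device)  # per-sample flow time
    x_t = (1.0 - t) * x0 + t * x1
    v_target = x1 - x0
    v_pred = head(llm_hidden, x_t, t)
    return nn.functional.mse_loss(v_pred, v_target)
```

Under these assumptions, inference would run the same head in reverse: starting from Gaussian noise, integrate the predicted velocity field over a few steps, conditioned on the LLM state for each frame, to obtain the next continuous speech token.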