In recent years, large language model (LLM) based text-to-speech (TTS) has emerged into the mainstream owing to its high naturalness and zero-shot capacity. In this paradigm, speech signals are discretized into token sequences, which are modeled by an LLM with text as prompts and reconstructed into waveforms by a token-based vocoder. Speech tokens therefore play a critical role in LLM-based TTS models. Current speech tokens are learned in an unsupervised manner, lacking explicit semantic information and alignment to the text. In this paper, we propose to represent speech with supervised semantic tokens, which are derived from a multilingual speech recognition model by inserting vector quantization into the encoder. Based on these tokens, we further propose a scalable zero-shot TTS synthesizer, CosyVoice, which consists of an LLM for text-to-token generation and a conditional flow matching model for token-to-speech synthesis. Experimental results show that supervised semantic tokens significantly outperform existing unsupervised tokens in content consistency and speaker similarity for zero-shot voice cloning. Moreover, we find that utilizing large-scale data further improves synthesis performance, indicating the scalable capacity of CosyVoice. To the best of our knowledge, this is the first attempt to incorporate supervised speech tokens into TTS models.
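The core tokenization idea is to insert a vector-quantization layer into the recognition model's encoder, so each continuous encoder frame is replaced by the index of its nearest codebook entry. As a toy illustration only (the codebook values, dimensions, and names below are hypothetical, not the paper's actual model), nearest-neighbor codebook assignment can be sketched as:

```python
import math

def quantize(frame, codebook):
    """Return the index of the codebook vector nearest to `frame`
    (Euclidean distance) -- the basic operation that turns a
    continuous encoder output into a discrete speech token."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(range(len(codebook)), key=lambda i: dist(frame, codebook[i]))

# Hypothetical 4-entry codebook over 2-D encoder features.
codebook = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]

# Three toy encoder frames mapped to a token sequence.
frames = [[0.1, 0.2], [0.9, 0.8], [0.2, 0.9]]
tokens = [quantize(f, codebook) for f in frames]
print(tokens)  # [0, 3, 2]
```

In the full system, such a token sequence would be the target of the text-to-token LLM, while the flow matching model maps tokens back to speech; in training, the codebook itself is learned jointly rather than fixed as here.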