In recent years, large language model (LLM) based text-to-speech (TTS) has emerged into the mainstream due to its high naturalness and zero-shot capacity. In this paradigm, speech signals are discretized into token sequences, which are modeled by an LLM with text as prompts and reconstructed into waveforms by a token-based vocoder. Speech tokens therefore play a critical role in LLM-based TTS models. Current speech tokens are learned in an unsupervised manner, lacking explicit semantic information and alignment to the text. In this paper, we propose to represent speech with supervised semantic tokens, which are derived from a multilingual speech recognition model by inserting vector quantization into the encoder. Based on these tokens, we further propose a scalable zero-shot TTS synthesizer, CosyVoice, which consists of an LLM for text-to-token generation and a conditional flow matching model for token-to-speech synthesis. Experimental results show that supervised semantic tokens significantly outperform existing unsupervised tokens in terms of content consistency and speaker similarity for zero-shot voice cloning. Moreover, we find that utilizing large-scale data further improves synthesis performance, indicating the scalable capacity of CosyVoice. To the best of our knowledge, this is the first attempt to incorporate supervised speech tokens into TTS models.
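The core tokenization step, inserting vector quantization into a recognition encoder, can be illustrated with a minimal sketch. The function name `quantize`, the codebook size, and the frame dimensions below are illustrative assumptions, not the paper's actual configuration; the sketch only shows how continuous encoder frames map to discrete token ids via nearest-codebook lookup.

```python
import numpy as np

def quantize(encoder_states, codebook):
    """Map each continuous encoder frame to its nearest codebook entry
    (squared L2 distance), yielding discrete speech token ids.
    Hypothetical minimal sketch of a vector-quantization layer."""
    # Pairwise squared distances, shape (num_frames, codebook_size).
    d = ((encoder_states[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    token_ids = d.argmin(axis=1)          # one discrete id per frame
    return token_ids, codebook[token_ids]  # ids and quantized vectors

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 4))   # 8 codes of dimension 4 (illustrative)
frames = rng.normal(size=(5, 4))     # 5 encoder output frames
ids, quantized = quantize(frames, codebook)
```

In a supervised setup like the one described, such a layer sits inside the recognition encoder, so the resulting `ids` sequence is trained against text labels and carries explicit semantic content, unlike tokens from purely unsupervised codecs.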