With the help of discrete neural audio codecs, large language models (LLMs) have increasingly been recognized as a promising methodology for zero-shot text-to-speech (TTS) synthesis. However, sampling-based decoding strategies bring remarkable diversity to generation but also cause robustness issues such as typos, omissions, and repetitions. In addition, the high sampling rate of audio imposes substantial computational overhead on autoregressive inference. To address these issues, we propose VALL-E R, a robust and efficient zero-shot TTS system built upon VALL-E. Specifically, we introduce a phoneme monotonic alignment strategy that strengthens the connection between phonemes and the acoustic sequence, ensuring more precise alignment by constraining each acoustic token to match its associated phoneme. Furthermore, we employ a codec-merging approach to downsample the discrete codes in the shallow quantization layer, accelerating decoding while preserving high speech quality. Benefiting from these strategies, VALL-E R gains controllability over phonemes and demonstrates strong robustness, achieving a word error rate (WER) close to that of the ground truth. It also requires fewer autoregressive steps, reducing inference time by over 60%. This research has the potential to be applied to meaningful projects such as generating speech for people affected by aphasia. Audio samples will be available at: https://aka.ms/valler.
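The monotonic alignment constraint described above can be illustrated with a minimal sketch: each decoded acoustic frame carries a binary "advance" decision, and a phoneme pointer moves forward at most one position per frame and never backward, so every acoustic token stays bound to exactly one phoneme. The function name, the flag representation, and the clamping behavior are assumptions for illustration, not the paper's actual API.

```python
def align_monotonic(advance_flags, num_phonemes):
    """Map per-frame advance flags to phoneme indices.

    Hypothetical sketch of a monotonic alignment: the pointer is
    non-decreasing, advances by at most one phoneme per acoustic
    frame, and is clamped at the final phoneme.
    """
    idx, path = 0, []
    for adv in advance_flags:
        if adv and idx < num_phonemes - 1:
            idx += 1          # advance to the next phoneme
        path.append(idx)      # bind this acoustic frame to phoneme idx
    return path

# Example: 6 acoustic frames aligned against 3 phonemes.
path = align_monotonic([0, 1, 0, 1, 1, 0], num_phonemes=3)
print(path)  # [0, 1, 1, 2, 2, 2] -- non-decreasing, no phoneme skipped
```

Because the pointer can only stay or step forward, pathological alignments that cause omissions (skipping phonemes) or repetitions (jumping backward) are ruled out by construction.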
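One plausible reading of the codec-merging downsampling can be sketched as follows: adjacent codes in the shallow quantization layer are mapped back to their codebook embeddings, averaged, and re-quantized to the nearest codebook entry, halving the number of autoregressive steps. The merge ratio, the L2 re-quantization, and all names here are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def merge_codes(codes, codebook, ratio=2):
    """Downsample a discrete code sequence by merging adjacent frames.

    Hypothetical sketch: each group of `ratio` codes is looked up in the
    codebook, the embeddings are averaged, and the mean is re-quantized
    to the nearest codebook entry by L2 distance.
    """
    T = len(codes) - len(codes) % ratio            # drop any ragged tail
    emb = codebook[codes[:T]]                      # (T, D) embeddings
    merged = emb.reshape(-1, ratio, emb.shape[-1]).mean(axis=1)
    # Re-quantize each merged embedding to its nearest codebook vector.
    d = ((merged[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)

rng = np.random.default_rng(0)
codebook = rng.normal(size=(1024, 8))              # toy codebook: 1024 entries, dim 8
codes = rng.integers(0, 1024, size=100)
merged = merge_codes(codes, codebook)
print(merged.shape)                                # half as many frames to decode
```

Halving the shallow-layer code rate directly halves the number of autoregressive decoding steps, which is the source of the reported inference speedup.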