The language model (LM) approach based on acoustic and linguistic prompts, such as VALL-E, has achieved remarkable progress in the field of zero-shot audio generation. However, existing methods still have some limitations: 1) repetitions, transpositions, and omissions in the synthesized speech, due to limited alignment constraints between audio and phoneme tokens; 2) difficulty in exercising fine-grained control over the synthesized speech with an autoregressive (AR) language model; 3) infinite silence generation caused by the nature of AR-based decoding, especially under the greedy strategy. To alleviate these issues, we propose ELLA-V, a simple but efficient LM-based zero-shot text-to-speech (TTS) framework that enables fine-grained control over synthesized audio at the phoneme level. The key to ELLA-V is interleaving the sequences of acoustic and phoneme tokens, so that each phoneme token appears ahead of its corresponding acoustic tokens. Experimental results show that our model outperforms VALL-E in accuracy and delivers more stable results under both greedy and sampling-based decoding strategies. The code of ELLA-V will be open-sourced after cleanups. Audio samples are available at https://ereboas.github.io/ELLAV/.
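The central sequence-construction idea, interleaving so each phoneme token precedes its aligned acoustic tokens, can be illustrated with a minimal sketch. This is a hypothetical toy illustration, not the paper's implementation: the function name `interleave`, the phoneme symbols, and the integer acoustic-token IDs are all assumed, and details such as special end-of-phoneme markers and how the phoneme-to-audio alignment is obtained are left out.

```python
def interleave(phonemes, acoustic_tokens_per_phoneme):
    """Build an interleaved training sequence in which each phoneme
    token appears immediately before the acoustic tokens aligned to it.

    phonemes: list of phoneme tokens (strings here, for readability)
    acoustic_tokens_per_phoneme: list of lists of acoustic-token IDs,
        one sublist per phoneme (alignment is assumed given)
    """
    seq = []
    for ph, acoustics in zip(phonemes, acoustic_tokens_per_phoneme):
        seq.append(ph)          # phoneme token first
        seq.extend(acoustics)   # then its aligned acoustic tokens
    return seq

# Toy example: three phonemes with made-up acoustic-token IDs.
phonemes = ["HH", "AH", "L"]
acoustic = [[101, 102], [103], [104, 105, 106]]
print(interleave(phonemes, acoustic))
# -> ['HH', 101, 102, 'AH', 103, 'L', 104, 105, 106]
```

Placing each phoneme ahead of its acoustic tokens gives the AR decoder an explicit local alignment anchor, which is what the abstract credits for reducing repetitions, transpositions, and omissions.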