Flow-matching-based text-to-speech (TTS) models have demonstrated high-quality speech synthesis. However, most current flow-matching-based TTS models still rely on reference transcripts corresponding to the audio prompt during synthesis. This dependency prevents cross-lingual voice cloning when audio prompt transcripts are unavailable, particularly for unseen languages. The key challenges in removing this dependency are identifying word boundaries during training and determining an appropriate duration during inference. In this paper, we introduce Cross-Lingual F5-TTS, a framework that enables cross-lingual voice cloning without audio prompt transcripts. Our method preprocesses audio prompts with forced alignment to obtain word boundaries, enabling direct synthesis from audio prompts while excluding their transcripts during training. To address the duration-modeling challenge, we train speaking rate predictors at different linguistic granularities to derive duration from the speaker's pace. Experiments show that our approach matches the performance of F5-TTS while enabling cross-lingual voice cloning.
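The duration-modeling idea in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the choice of word-level units, and the example rate are all assumptions made here for clarity. The point is simply that once a speaking rate predictor estimates the prompt speaker's pace, the target duration follows from the number of linguistic units in the text to be synthesized.

```python
# Hedged sketch: deriving target-speech duration from a predicted speaking rate.
# Names, units, and values below are illustrative assumptions, not the paper's API.

def estimate_duration(num_units: int, rate_units_per_sec: float) -> float:
    """Duration in seconds for `num_units` linguistic units (e.g. words or
    syllables, depending on the predictor's granularity) spoken at
    `rate_units_per_sec`, the pace estimated from the audio prompt."""
    if rate_units_per_sec <= 0:
        raise ValueError("speaking rate must be positive")
    return num_units / rate_units_per_sec

# Example: suppose a word-level rate predictor estimates the prompt speaker
# averages 2.5 words/second, and the target text contains 12 words.
duration = estimate_duration(12, 2.5)  # 4.8 seconds
```

A coarser granularity (e.g. words) gives a rougher but more robust estimate, while a finer one (e.g. syllables or phonemes) tracks pace more closely; the abstract's mention of predictors "at different linguistic granularities" suggests combining such levels.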