While recent zero-shot multi-speaker text-to-speech (TTS) models achieve impressive results, they typically rely on extensive transcribed speech datasets from numerous speakers and intricate training pipelines. Meanwhile, self-supervised learning (SSL) speech features have emerged as effective intermediate representations for TTS. It has also been observed that SSL features from different speakers that are linearly close share phonetic information while preserving individual speaker identity, which enables straightforward and robust voice cloning. In this study, we introduce SSL-TTS, a lightweight and efficient zero-shot TTS framework trained on transcribed speech from a single speaker. SSL-TTS leverages SSL features and retrieval methods for simple and robust zero-shot multi-speaker synthesis. Objective and subjective evaluations show that our approach achieves performance comparable to state-of-the-art models that require significantly larger training datasets. The low training data requirements make SSL-TTS well suited for developing multi-speaker TTS systems for low-resource domains and languages. We also introduce an interpolation parameter that enables fine control over the output speech by blending voices. Demo samples are available at https://idiap.github.io/ssl-tts
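The retrieval-and-blending idea described above can be illustrated with a minimal NumPy sketch in the spirit of kNN feature matching: each source-frame SSL feature is matched against a target speaker's feature set, and an interpolation weight blends the two voices. The function name, `k`, and `alpha` here are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def knn_blend(src_feats, tgt_feats, k=4, alpha=1.0):
    """Hypothetical sketch of retrieval-based voice blending.

    For each source frame, retrieve the k most similar target-speaker
    frames (cosine similarity) and blend the source feature with their
    mean via the interpolation weight alpha:
      alpha = 0.0 keeps the source voice unchanged,
      alpha = 1.0 fully adopts the retrieved target-speaker features.
    """
    # Normalize rows so the dot product equals cosine similarity.
    s = src_feats / np.linalg.norm(src_feats, axis=1, keepdims=True)
    t = tgt_feats / np.linalg.norm(tgt_feats, axis=1, keepdims=True)
    sims = s @ t.T                            # (n_src, n_tgt) similarities
    idx = np.argsort(-sims, axis=1)[:, :k]    # k nearest target frames per source frame
    matched = tgt_feats[idx].mean(axis=1)     # average the retrieved neighbors
    # Linear interpolation between the source and retrieved features.
    return (1.0 - alpha) * src_feats + alpha * matched
```

Because SSL features that are linearly close share phonetic content, a simple linear interpolation like this can trade off smoothly between the two speaker identities.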