Fine-grained control over voice impressions (e.g., making a voice brighter or calmer) is a key frontier in building more controllable text-to-speech. However, this nascent field faces two key challenges. The first is impression leakage, where the synthesized voice is undesirably influenced by the speaker's reference audio rather than by the separately specified target impression. The second is the lack of a public, annotated corpus. To mitigate impression leakage, we propose two methods: 1) a training strategy that uses one utterance of a speaker for speaker identity and another utterance of the same speaker for the target impression, and 2) a novel reference-free model that generates a speaker embedding solely from the target impression, combining improved robustness against leakage with the convenience of reference-free generation. Objective and subjective evaluations demonstrate a significant improvement in controllability. Our best method reduced the mean squared error of 11-dimensional voice impression vectors from 0.61 to 0.41 objectively and from 1.15 to 0.92 subjectively, while maintaining high fidelity. To foster reproducible research, we introduce LibriTTS-VI, the first public voice impression dataset released with clear annotation standards, built upon the LibriTTS-R corpus.
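The reference-free model described above maps a target impression vector directly to a speaker embedding, so no reference audio is consumed at synthesis time. The following is a minimal illustrative sketch of that idea only; the network shape, layer sizes, activation, and the `impression_to_embedding` name are all assumptions, not the paper's actual architecture, and the weights here are random stand-ins for trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

IMPRESSION_DIM = 11   # 11-dimensional voice impression vector (from the abstract)
EMBED_DIM = 256       # assumed speaker-embedding size (not specified in the paper)

# Randomly initialized weights stand in for trained parameters.
W1 = rng.standard_normal((IMPRESSION_DIM, 128)) * 0.1
b1 = np.zeros(128)
W2 = rng.standard_normal((128, EMBED_DIM)) * 0.1
b2 = np.zeros(EMBED_DIM)

def impression_to_embedding(impression: np.ndarray) -> np.ndarray:
    """Map an 11-dim impression vector to a unit-norm speaker embedding
    via a small MLP (a hypothetical stand-in for the paper's model)."""
    h = np.tanh(impression @ W1 + b1)
    e = h @ W2 + b2
    # L2-normalize, as is common for speaker embeddings.
    return e / np.linalg.norm(e)

# Example: a mildly "bright" impression, all scales set to 0.5.
emb = impression_to_embedding(np.full(IMPRESSION_DIM, 0.5))
print(emb.shape)  # (256,)
```

Conditioning the TTS decoder on `emb` instead of a reference-audio embedding is what removes the leakage path: the only speaker information available is what the impression vector encodes.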