Although numerous recent studies have suggested new frameworks for zero-shot TTS using large-scale, real-world data, studies that focus on the intelligibility of zero-shot TTS are relatively scarce. Zero-shot TTS demands additional effort to ensure clear pronunciation and speech quality because it inherently requires replacing a core parameter (speaker embedding or acoustic prompt) with a new one at the inference stage. In this study, we propose a zero-shot TTS model focused on intelligibility, which we refer to as Intelli-Z. Intelli-Z learns speaker embeddings by using multi-speaker TTS as its teacher and is trained with a cycle-consistency loss so that mismatched text-speech pairs can be included in training. Additionally, it selectively aggregates speaker embeddings along the temporal dimension to minimize the interference of the text content of the reference speech at the inference stage. We substantiate the effectiveness of the proposed methods with an ablation study. The Mean Opinion Score (MOS) increases by 9% for unseen speakers when the first two methods are applied, and it further improves by 16% when selective temporal aggregation is applied.
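To make the idea of selective temporal aggregation concrete, the sketch below shows one generic way such a mechanism could be realized: frame-level speaker embeddings are pooled with learned attention weights, so frames dominated by linguistic content can be downweighted. This is a minimal illustration under assumed shapes and module names (`SelectiveTemporalAggregation`, `scorer`), not the authors' implementation.

```python
# Illustrative sketch (not the paper's released code): attention pooling over
# frame-level speaker embeddings as one possible form of selective temporal
# aggregation. Shapes, dimensions, and module names are assumptions.
import torch
import torch.nn as nn


class SelectiveTemporalAggregation(nn.Module):
    """Pool frame-level speaker embeddings into a single speaker vector."""

    def __init__(self, dim: int = 256):
        super().__init__()
        # Scores each frame; low scores suppress content-dominated frames.
        self.scorer = nn.Sequential(
            nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, 1)
        )

    def forward(self, frames: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, dim) frame-level speaker embeddings
        # mask:   (batch, time), 1 for valid frames, 0 for padding
        scores = self.scorer(frames).squeeze(-1)            # (batch, time)
        scores = scores.masked_fill(mask == 0, float("-inf"))
        weights = torch.softmax(scores, dim=-1)             # selective weights
        return torch.einsum("bt,btd->bd", weights, frames)  # (batch, dim)


if __name__ == "__main__":
    agg = SelectiveTemporalAggregation(dim=256)
    x = torch.randn(2, 120, 256)       # embeddings extracted from reference speech
    m = torch.ones(2, 120)             # all frames valid in this toy example
    speaker_embedding = agg(x, m)
    print(speaker_embedding.shape)     # torch.Size([2, 256])
```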