We address the noise-robustness problem of zero-shot TTS systems by proposing dual-objective training for the speaker encoder that combines a speech-synthesis objective with a self-supervised DINO loss. The synthesis objective enriches the speaker encoder with a wider range of speech characteristics beneficial for voice cloning, while the DINO objective improves speaker representation learning, ensuring robustness to noise and speaker discriminability. Experiments demonstrate significant improvements in subjective metrics under both clean and noisy conditions, outperforming traditional speaker-encoder-based TTS systems. Additionally, we explore training zero-shot TTS on noisy, unlabeled data. Our two-stage training strategy, which leverages self-supervised speech models to distinguish noisy from clean speech, yields notable gains in similarity and naturalness, especially with noisy training datasets, compared to the ASR-transcription-based approach.
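The dual objective described above can be sketched as a weighted sum of a synthesis loss and a DINO-style self-distillation term. The following is a minimal NumPy illustration, not the paper's implementation: the function names, temperatures, and weighting factor `lam` are illustrative assumptions, and the DINO term follows the standard formulation (teacher distribution centered and sharpened, cross-entropy against the student).

```python
import numpy as np

def softmax(x, tau):
    # Temperature-scaled softmax, numerically stabilized.
    z = x / tau
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def dino_loss(student_logits, teacher_logits, center, tau_s=0.1, tau_t=0.04):
    # DINO-style loss: cross-entropy between the sharpened, centered
    # teacher distribution and the student distribution.
    t = softmax(teacher_logits - center, tau_t)
    log_s = np.log(softmax(student_logits, tau_s) + 1e-12)
    return float(-(t * log_s).sum(axis=-1).mean())

def dual_objective(tts_loss, student_logits, teacher_logits, center, lam=1.0):
    # Hypothetical combined objective: synthesis loss plus a weighted
    # self-supervised DINO term on speaker-encoder outputs.
    return tts_loss + lam * dino_loss(student_logits, teacher_logits, center)
```

In a real system the student and teacher would be two views of the same speaker's speech through momentum-updated encoders; here the logits are left abstract so only the loss structure is shown.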