Various threats posed by advances in text-to-speech (TTS) have created a pressing need to reliably trace synthesized speech. However, contemporary approaches add watermarks to the audio separately after generation, a process that hurts both speech quality and watermark imperceptibility. These approaches are also limited in robustness and flexibility. To address these problems, we propose TraceableSpeech, a novel TTS model that directly generates watermarked speech, improving watermark imperceptibility and speech quality. Furthermore, we design frame-wise imprinting and extraction of watermarks, achieving higher robustness against resplicing attacks and temporal flexibility in operation. Experimental results show that TraceableSpeech outperforms strong baselines, in which VALL-E or HiFicodec is individually combined with WavMark, in watermark imperceptibility, speech quality, and resilience against resplicing attacks. It also applies to speech of various durations.