Various threats posed by progress in text-to-speech (TTS) have created a need to reliably trace synthesized speech. However, contemporary approaches to this task add watermarks to the audio separately after generation, a process that hurts both speech quality and watermark imperceptibility; these approaches are also limited in robustness and flexibility. To address these problems, we propose TraceableSpeech, a novel TTS model that directly generates watermarked speech, improving both watermark imperceptibility and speech quality. Furthermore, we design frame-wise imprinting and extraction of watermarks, achieving higher robustness against resplicing attacks and temporal flexibility in operation. Experimental results show that TraceableSpeech outperforms strong baselines, in which VALL-E or HiFi-Codec is paired with WavMark, in watermark imperceptibility, speech quality, and resilience against resplicing attacks. It also applies to speech of various durations.
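The intuition behind frame-wise imprinting can be illustrated with a toy sketch (our own simplification, not the paper's actual learned network): if the watermark message is carried redundantly by every frame, then any spliced-out segment still contains the full message, and extraction can recover it, e.g. by majority vote over the frames that survive.

```python
import numpy as np

def imprint(num_frames: int, message_bits: np.ndarray) -> np.ndarray:
    """Toy stand-in for frame-wise imprinting: tile the watermark
    message so every frame carries a full copy of it."""
    return np.tile(message_bits, (num_frames, 1))

def extract(frames: np.ndarray) -> np.ndarray:
    """Recover the message by majority vote across whatever frames
    remain after cropping or resplicing."""
    return (frames.mean(axis=0) >= 0.5).astype(int)

# An 8-bit watermark imprinted on 100 frames of "speech".
msg = np.array([1, 0, 1, 1, 0, 0, 1, 0])
watermarked = imprint(100, msg)

# Simulate a resplicing attack: keep only an arbitrary middle span.
cropped = watermarked[37:62]
assert np.array_equal(extract(cropped), msg)
```

Because every frame is self-contained with respect to the message, this per-frame redundancy is what gives robustness to resplicing and independence from utterance duration; the actual model realizes this with learned imprinting and extraction rather than literal tiling.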