Generated speech achieves human-level naturalness but escalates the security risks of misuse. However, existing watermarking methods fail to reconcile fidelity with robustness, as they rely either on simple superposition in the noise space or on intrusive alterations to model weights. To bridge this gap, we propose VocBulwark, an additional-parameter injection framework that freezes the generative model's parameters to preserve perceptual quality. Specifically, we design a Temporal Adapter that deeply entangles watermarks with acoustic attributes, working in synergy with a Coarse-to-Fine Gated Extractor to resist advanced attacks. Furthermore, we develop an Accuracy-Guided Optimization Curriculum that dynamically orchestrates gradient flow to resolve the optimization conflict between fidelity and robustness. Comprehensive experiments demonstrate that VocBulwark achieves high-capacity, high-fidelity watermarking and offers a robust defense in complex practical scenarios, with resilience to codec regeneration and variable-length manipulations.
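The core idea of additional-parameter injection can be sketched minimally: the pretrained generator's weights stay frozen, and only the injected adapter's parameters are trained to carry the watermark. The following toy example is an illustration under assumptions, not the paper's actual implementation; the class and parameter names (`FrozenGenerator`, `TemporalAdapter`) are hypothetical stand-ins.

```python
# Hypothetical sketch of additional-parameter injection: the generative
# model's weights are never updated; only the injected adapter's
# parameter is trained to embed the watermark bit.

class FrozenGenerator:
    """Stand-in for a pretrained vocoder; its weight stays fixed."""
    def __init__(self):
        self.weight = 2.0  # pretrained parameter, frozen

    def forward(self, x):
        return self.weight * x


class TemporalAdapter:
    """Injected module whose parameter carries the watermark."""
    def __init__(self):
        self.weight = 0.0  # the only trainable parameter

    def forward(self, x, bit):
        # entangle the watermark bit with the generated signal
        return x + self.weight * bit


def train_step(gen, adapter, x, bit, target, lr=0.1):
    # forward pass: frozen generator, then adapter injection
    y = adapter.forward(gen.forward(x), bit)
    # squared-error loss against the desired watermarked output;
    # the gradient flows only into the adapter's parameter
    grad = 2.0 * (y - target) * bit
    adapter.weight -= lr * grad
    return (y - target) ** 2


gen, adapter = FrozenGenerator(), TemporalAdapter()
frozen_before = gen.weight
for _ in range(50):
    train_step(gen, adapter, x=1.0, bit=1.0, target=2.5)

assert gen.weight == frozen_before   # generator remained frozen
print(round(adapter.weight, 3))      # → 0.5 (adapter learned the offset)
```

Because gradients reach only the adapter, the pretrained mapping is untouched, which is why this style of injection preserves perceptual quality while still letting the watermark be optimized for robustness.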