The rapid advancement of audio generation technologies has escalated the risk of malicious deepfake audio across speech, sound, singing voice, and music, threatening multimedia security and trust. While existing countermeasures (CMs) perform well in single-type audio deepfake detection (ADD), their performance degrades in cross-type scenarios. This paper studies the all-type ADD task. We are the first to comprehensively establish an all-type ADD benchmark to evaluate current CMs, incorporating cross-type deepfake detection across speech, sound, singing voice, and music. We then introduce the prompt tuning self-supervised learning (PT-SSL) training paradigm, which optimizes the SSL frontend by learning specialized prompt tokens for ADD and requires 458x fewer trainable parameters than fine-tuning (FT). Considering the auditory perception characteristics of different audio types, we propose the wavelet prompt tuning (WPT)-SSL method, which captures type-invariant auditory deepfake information from the frequency domain without additional training parameters, thereby surpassing FT on the all-type ADD task. To obtain a universal CM, we co-train on all types of deepfake audio. Experimental results demonstrate that WPT-XLSR-AASIST achieves the best performance, with an average EER of 3.58% across all evaluation sets. The code is available online.
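To illustrate the parameter-efficiency idea behind prompt tuning, the following is a minimal NumPy sketch: the pretrained frontend weights stay frozen, and only a small set of prompt tokens prepended to the input sequence is trainable. All sizes here (embedding dimension, sequence length, prompt count) are illustrative assumptions, not the paper's actual XLSR configuration or its 458x figure.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 1024          # assumed SSL embedding dimension (illustrative)
T = 200           # frames in one utterance (illustrative)
P = 10            # number of learnable prompt tokens (illustrative)

# Frozen frontend weights: stand-in for one pretrained SSL layer.
frozen_W = rng.standard_normal((D, D)) / np.sqrt(D)

# The only trainable parameters under prompt tuning: P prompt tokens.
prompts = np.zeros((P, D))

def forward(x, prompts):
    """Prepend prompt tokens to the frame sequence, then apply the
    frozen layer. Shapes: (T, D) -> (P + T, D)."""
    z = np.concatenate([prompts, x], axis=0)
    return z @ frozen_W

x = rng.standard_normal((T, D))
out = forward(x, prompts)
print(out.shape)   # sequence grows by P prompt positions

# Trainable-parameter comparison: prompts only vs. full fine-tuning
# of this single layer.
pt_params = prompts.size
ft_params = frozen_W.size
print(pt_params, ft_params, ft_params // pt_params)
```

In a full model the same contrast holds per transformer layer, which is what drives the large reduction in trainable parameters relative to fine-tuning the whole SSL frontend.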