The generalization of Fake Audio Detection (FAD) is critical given the continual emergence of new spoofing techniques. Traditional FAD methods often focus solely on distinguishing genuine audio from known spoofed audio. We propose a Genuine-Focused Learning (GFL) framework aiming for highly generalized FAD, called GFL-FAD. This method incorporates a Counterfactual Reasoning Enhanced Representation (CRER), based on audio reconstruction with the Mask AutoEncoder (MAE) architecture, to accurately model genuine audio features. To reduce the influence of spoofed audio during training, we introduce a genuine audio reconstruction loss that keeps the model focused on learning the features of genuine data. In addition, content-related bottleneck (BN) features are extracted from the MAE to supplement the knowledge of the original audio. These BN features are adaptively fused with CRER to further improve robustness. Our method achieves state-of-the-art performance with an EER of 0.25% on ASVspoof2019 LA.
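The genuine audio reconstruction loss described above restricts the reconstruction objective to genuine samples so that spoofed audio does not shape the learned representation. The following is a minimal NumPy sketch of that idea; the function name, the L2 reconstruction error, and the label convention (`1` = genuine, `0` = spoofed) are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def genuine_reconstruction_loss(recon, target, labels):
    """Mean reconstruction error over genuine samples only.

    recon, target: (batch, feat) arrays of reconstructed and original
    audio features; labels: (batch,) with 1 = genuine, 0 = spoofed.
    Spoofed samples are masked out, so they contribute no gradient to
    the reconstruction objective. Squared error is an assumed stand-in
    for the paper's loss.
    """
    genuine = labels == 1
    if not genuine.any():
        return 0.0  # no genuine samples in this batch
    err = (recon[genuine] - target[genuine]) ** 2
    return float(err.mean())
```

Because the mask is applied before averaging, an arbitrarily bad reconstruction of a spoofed sample leaves the loss unchanged.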
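The adaptive fusion of BN features with CRER can be pictured as a learned gate that decides, per dimension, how much of each representation to keep. The sketch below uses a simple sigmoid gate over the concatenated features; the gating form and parameter shapes are assumptions for illustration, not the paper's architecture.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def adaptive_fuse(crer, bn, w, b):
    """Gated fusion of a CRER vector and a BN feature vector.

    crer, bn: (d,) feature vectors; w: (2d, d) gate weights; b: (d,)
    gate bias. The gate is computed from both inputs, so the mix ratio
    adapts to the content of each sample. A convex per-dimension
    combination is an assumed, simplified fusion rule.
    """
    gate = sigmoid(np.concatenate([crer, bn]) @ w + b)  # in (0, 1)
    return gate * crer + (1.0 - gate) * bn
```

Since the gate lies strictly in (0, 1), each fused dimension is a convex combination of the corresponding CRER and BN values.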