Explaining the decisions made by audio spoofing detection models is crucial for fostering trust in detection outcomes. However, current research on the interpretability of detection models is limited to applying XAI tools to models after training. In this paper, we use the wav2vec 2.0 model and attentive utterance-level features to integrate interpretability directly into the model's architecture, thereby enhancing the transparency of the decision-making process. Specifically, we propose a class activation representation that localizes the discriminative frames contributing to detection. Furthermore, we demonstrate that multi-label training based on spoofing types, rather than binary bonafide/spoofed labels, enables the model to learn the distinct characteristics of different attacks, significantly improving detection performance. Our model achieves state-of-the-art results, with an EER of 0.51% and a min t-DCF of 0.0165 on the ASVspoof2019-LA set.
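The class activation representation described above scores each frame by how strongly it drives the class decision. The paper does not specify the computation here, but a common formulation (as in class activation mapping) takes the dot product of each frame's feature vector with the classifier weights of the target class; the snippet below is a minimal, hypothetical sketch of that idea, with all names and toy values invented for illustration.

```python
# Hypothetical sketch: frame-level class activation scores, assuming a
# linear classification head over per-frame features (e.g. wav2vec 2.0
# frame embeddings). Names and values are illustrative, not the paper's.

def frame_activation_scores(frames, class_weights):
    """Dot each frame's feature vector with the target class's weights.

    frames: list of length-D feature vectors, one per frame
    class_weights: length-D weight vector of the target class
    Returns one activation score per frame; higher scores mark frames
    that contribute more strongly to that class's decision.
    """
    return [sum(f * w for f, w in zip(frame, class_weights))
            for frame in frames]

# Toy example: 3 frames with 2-dim features, weights of a "spoof" class.
frames = [[0.1, 0.9], [0.8, 0.2], [0.5, 0.5]]
w_spoof = [1.0, -1.0]
scores = frame_activation_scores(frames, w_spoof)
# The highest-scoring frame is the most discriminative for "spoof".
most_discriminative = max(range(len(scores)), key=lambda i: scores[i])
```

Ranking frames by these scores gives the localization the abstract refers to: the top-scoring frames can be highlighted in the waveform as the regions the model deems indicative of spoofing.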