Audio deepfake detection (ADD) is crucial to combat the misuse of speech synthesized by generative AI models. Existing ADD models suffer from generalization issues, with a large performance discrepancy between in-domain and out-of-domain data. Moreover, the black-box nature of existing models limits their use in real-world scenarios, where explanations are required for model decisions. To alleviate these issues, we introduce a new ADD model that explicitly exploits the Style-LInguistics Mismatch (SLIM) in fake speech to separate it from real speech. SLIM first employs self-supervised pretraining on only real samples to learn the style-linguistics dependency in the real class. The learned features are then combined with standard pretrained acoustic features (e.g., Wav2vec) to train a classifier on the real and fake classes. With the feature encoders frozen, SLIM outperforms benchmark methods on out-of-domain datasets while achieving competitive results on in-domain data. The features learned by SLIM allow us to quantify the (mis)match between style and linguistic content in a sample, thereby facilitating an explanation of the model decision.
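To make the two-stage recipe concrete, below is a minimal PyTorch sketch of the idea. The module names, projection sizes, cosine-based dependency objective, and classifier head are all illustrative assumptions for exposition, not the paper's exact design.

```python
# Hypothetical sketch of the two-stage SLIM pipeline described above.
# Architectures, dimensions, and losses are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

FEAT_DIM = 768  # assumed dimensionality of frozen pretrained features

class StyleLinguisticsEncoder(nn.Module):
    """Projects style and linguistic features into a shared space where
    their dependency can be measured (assumed architecture)."""
    def __init__(self, dim=FEAT_DIM, proj=256):
        super().__init__()
        self.style_proj = nn.Linear(dim, proj)
        self.ling_proj = nn.Linear(dim, proj)

    def forward(self, style_feat, ling_feat):
        s = F.normalize(self.style_proj(style_feat), dim=-1)
        l = F.normalize(self.ling_proj(ling_feat), dim=-1)
        return s, l

def dependency_loss(s, l):
    # Stage 1 (real samples only): pull matched style/linguistic
    # embeddings together so the encoder captures their natural dependency.
    return 1.0 - F.cosine_similarity(s, l, dim=-1).mean()

def mismatch_score(s, l):
    # Explanation signal: a high score means style and linguistic content
    # disagree, which the abstract associates with fake speech.
    return 1.0 - F.cosine_similarity(s, l, dim=-1)

class SlimClassifier(nn.Module):
    """Stage 2: frozen SLIM features concatenated with frozen acoustic
    features (e.g., Wav2vec), followed by a lightweight head."""
    def __init__(self, proj=256, acoustic_dim=FEAT_DIM):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(2 * proj + acoustic_dim, 128),
            nn.ReLU(),
            nn.Linear(128, 2),  # real vs. fake
        )

    def forward(self, s, l, acoustic_feat):
        return self.head(torch.cat([s, l, acoustic_feat], dim=-1))

# Toy usage with random stand-ins for pretrained features.
encoder = StyleLinguisticsEncoder()
style, ling, acoustic = (torch.randn(4, FEAT_DIM) for _ in range(3))
s, l = encoder(style, ling)
loss = dependency_loss(s, l)               # stage-1 objective on real data
logits = SlimClassifier()(s, l, acoustic)  # stage-2 classification
print(loss.item(), mismatch_score(s, l).shape, logits.shape)
```

In this sketch the same mismatch score serves both training (minimized on real data in stage 1) and explanation (reported per sample at inference), which mirrors how the abstract ties generalization and interpretability to a single style-linguistics dependency signal.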