In this paper, we propose a deep-learning-based system for deepfake audio detection. In particular, the raw input audio is first transformed into various spectrograms using three transformation methods, the Short-Time Fourier Transform (STFT), the Constant-Q Transform (CQT), and the Wavelet Transform (WT), combined with different auditory-based filters: Mel, Gammatone, linear filters (LF), and the discrete cosine transform (DCT). Given the spectrograms, we evaluate a wide range of classification models based on three deep learning approaches. The first approach trains our proposed baseline models, a CNN-based model (CNN-baseline), an RNN-based model (RNN-baseline), and a C-RNN model (C-RNN baseline), directly on the spectrograms. The second approach applies transfer learning from computer vision models such as ResNet-18, MobileNet-V3, EfficientNet-B0, DenseNet-121, ShuffleNet-V2, Swin Transformer, ConvNeXt-Tiny, GoogLeNet, MNASNet, and RegNet. In the third approach, we leverage the state-of-the-art audio pre-trained models Whisper, Seamless, SpeechBrain, and Pyannote to extract audio embeddings from the input spectrograms; the embeddings are then fed into a multilayer perceptron (MLP) to classify audio samples as fake or real. Finally, the high-performing deep learning models from these approaches are fused to achieve the best performance. We evaluated our proposed models on the ASVspoof 2019 benchmark dataset. Our best ensemble model achieved an Equal Error Rate (EER) of 0.03, which is highly competitive with top-performing systems in the ASVspoof 2019 challenge. Experimental results also highlight the potential of selective spectrograms and deep learning approaches to enhance audio deepfake detection.
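The front-end described above converts raw audio into time-frequency representations before any classifier sees it. As an illustration only, the following is a minimal NumPy sketch of an STFT log-power spectrogram; the window length, hop size, Hann window, and dB floor are illustrative assumptions, not the settings used in the paper:

```python
import numpy as np

def stft_spectrogram(x, n_fft=512, hop=256):
    """Return a log-power STFT spectrogram of shape (frames, n_fft//2 + 1)."""
    window = np.hanning(n_fft)
    # Slice the signal into overlapping windowed frames.
    frames = [x[i:i + n_fft] * window
              for i in range(0, len(x) - n_fft + 1, hop)]
    # Magnitude-squared FFT of each frame (one-sided spectrum).
    spec = np.abs(np.fft.rfft(np.stack(frames), axis=1)) ** 2
    # Log-power in dB, with a small floor to avoid log(0).
    return 10 * np.log10(spec + 1e-10)
```

The CQT and wavelet transforms mentioned in the abstract differ only in this front-end stage; each yields a 2-D image-like array that the CNN/RNN baselines and transfer-learning backbones can consume directly.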
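The reported metric, Equal Error Rate, is the operating point at which the false-acceptance rate (spoofed audio accepted) equals the false-rejection rate (bona fide audio rejected). A minimal NumPy sketch of this computation follows; the threshold sweep and the higher-score-means-bona-fide convention are assumptions for illustration, not the official ASVspoof scoring code:

```python
import numpy as np

def equal_error_rate(scores, labels):
    """EER from detection scores (higher = more bona fide) and labels (1 = bona fide, 0 = spoof)."""
    thresholds = np.sort(np.unique(scores))
    fars, frrs = [], []
    for t in thresholds:
        accept = scores >= t
        fars.append(np.mean(accept[labels == 0]))    # spoof wrongly accepted
        frrs.append(np.mean(~accept[labels == 1]))   # bona fide wrongly rejected
    fars, frrs = np.array(fars), np.array(frrs)
    # Take the threshold where the two error rates are closest and average them.
    i = np.argmin(np.abs(fars - frrs))
    return (fars[i] + frrs[i]) / 2
```

A lower EER is better; a perfectly separating system scores 0.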