Transformers and their variants have achieved great success in speech processing, but their multi-head self-attention mechanism is computationally expensive. Mamba, a novel selective state space model, has been proposed as an alternative. Building on its success in automatic speech recognition, we apply Mamba to spoofing attack detection. Mamba is well suited to this task because its ability to model long sequences lets it capture the artifacts in spoofed speech signals. However, Mamba's performance may suffer when it is trained with limited labeled data. To mitigate this, we propose combining a new dual-column Mamba architecture with self-supervised learning, using the pre-trained wav2vec 2.0 model. Experiments show that our approach achieves competitive results and faster inference on the ASVspoof 2021 LA and DF datasets, and emerges as the strongest candidate for spoofing attack detection on the more challenging In-the-Wild dataset. The code will be publicly released in due course.