Transformer-based models have shown strong performance in speech deepfake detection, largely due to the effectiveness of the multi-head self-attention (MHSA) mechanism. MHSA provides frame-level attention scores, which are particularly valuable because deepfake artifacts often occur in small, localized regions along the temporal dimension of speech. This makes fine-grained frame modeling essential for accurately detecting subtle spoofing cues. In this work, we propose fine-grained frame modeling (FGFM) for MHSA-based speech deepfake detection, in which the most informative frames are first selected by a multi-head voting (MHV) module. These selected frames are then refined by a cross-layer refinement (CLR) module to strengthen the model's ability to learn subtle spoofing cues. Experimental results demonstrate that our method outperforms the baseline model, achieving Equal Error Rates (EERs) of 0.90%, 1.88%, and 6.64% on the LA21, DF21, and ITW datasets, respectively. These consistent improvements across multiple benchmarks highlight the effectiveness of our fine-grained frame modeling for robust speech deepfake detection.
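To make the two-stage pipeline concrete, the following is a minimal sketch of how attention-based frame selection and cross-layer fusion could be implemented. It assumes PyTorch, a (batch, heads, frames, frames) attention tensor, summation-based voting, and mean fusion across layers; the shapes, the pooling choices, and both function names are illustrative assumptions, not the paper's actual MHV and CLR modules.

```python
import torch


def multi_head_voting(attn, top_k):
    """Hypothetical multi-head voting (MHV) sketch.

    attn:  (batch, heads, frames, frames) self-attention weights
           from one MHSA layer.
    Returns indices of the top_k frames receiving the most attention,
    where each head's "vote" is the attention mass a frame attracts.
    """
    # Attention mass each frame receives, per head: (batch, heads, frames)
    per_head_scores = attn.sum(dim=2)
    # Pool the heads' votes: (batch, frames)
    votes = per_head_scores.sum(dim=1)
    # Keep the most-attended (most informative) frame indices
    return votes.topk(top_k, dim=-1).indices


def cross_layer_refinement(hidden_states, frame_idx):
    """Hypothetical cross-layer refinement (CLR) sketch: gather the
    selected frames from every transformer layer and fuse them, so
    shallow and deep views of the same frames are combined.

    hidden_states: list of (batch, frames, dim) tensors, one per layer.
    frame_idx:     (batch, top_k) indices from multi_head_voting.
    """
    dim = hidden_states[0].size(-1)
    idx = frame_idx.unsqueeze(-1).expand(-1, -1, dim)  # (batch, top_k, dim)
    selected = [h.gather(1, idx) for h in hidden_states]
    # Simple mean fusion across layers; the actual CLR module may differ.
    return torch.stack(selected, dim=0).mean(dim=0)    # (batch, top_k, dim)


if __name__ == "__main__":
    batch, heads, frames, dim, top_k, layers = 2, 8, 100, 64, 10, 4
    attn = torch.softmax(torch.randn(batch, heads, frames, frames), dim=-1)
    states = [torch.randn(batch, frames, dim) for _ in range(layers)]
    idx = multi_head_voting(attn, top_k)
    refined = cross_layer_refinement(states, idx)
    print(refined.shape)  # torch.Size([2, 10, 64])
```

The refined (batch, top_k, dim) frame representations would then feed the downstream spoof/bona fide classifier, so the decision is driven by the localized regions where deepfake artifacts are most likely to appear.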