Audio deepfake detection has become increasingly challenging due to rapid advances in speech synthesis and voice conversion technologies, particularly under channel distortions, replay attacks, and real-world recording conditions. This paper proposes a resolution-aware audio deepfake detection framework that explicitly models and aligns multi-resolution spectral representations through cross-scale attention and consistency learning. Unlike conventional single-resolution or implicit feature-fusion approaches, the proposed method enforces agreement across complementary time--frequency scales. The proposed framework is evaluated on three representative benchmarks: ASVspoof 2019 (LA and PA), the Fake-or-Real (FoR) dataset, and the In-the-Wild Audio Deepfake dataset under a speaker-disjoint protocol. The method achieves near-perfect performance on ASVspoof LA (EER 0.16%), strong robustness on ASVspoof PA (EER 5.09%), FoR rerecorded audio (EER 4.54%), and in-the-wild deepfakes (AUC 0.98, EER 4.81%), significantly outperforming single-resolution and non-attention baselines under challenging conditions. The proposed model remains lightweight and efficient, requiring only 159k parameters and less than 1~GFLOP per inference, making it suitable for practical deployment. Comprehensive ablation studies confirm the critical contributions of cross-scale attention and consistency learning, while gradient-based interpretability analysis reveals that the model learns resolution-consistent and semantically meaningful spectral cues across diverse spoofing conditions. These results demonstrate that explicit cross-resolution modeling provides a principled, robust, and scalable foundation for next-generation audio deepfake detection systems.
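The core idea in the abstract — analyzing the same waveform at complementary time–frequency resolutions, letting one scale attend to the other, and penalizing disagreement between the scales — can be illustrated with a minimal numpy sketch. This is a hypothetical toy, not the authors' architecture: the projection matrices, embedding size, and STFT settings below are placeholder assumptions standing in for learned encoders.

```python
# Hypothetical sketch of resolution-aware analysis with cross-scale
# attention and a consistency term (NOT the paper's exact model).
import numpy as np

rng = np.random.default_rng(0)

def stft_mag(wav, n_fft, hop):
    """Magnitude spectrogram: windowed frames of length n_fft every hop samples."""
    frames = [wav[i:i + n_fft] for i in range(0, len(wav) - n_fft + 1, hop)]
    window = np.hanning(n_fft)
    return np.abs(np.fft.rfft(np.array(frames) * window, axis=1))  # (T, n_fft//2 + 1)

def cross_scale_attention(q, k, v):
    """Scaled dot-product attention: one scale's frames query the other's."""
    scores = q @ k.T / np.sqrt(q.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)  # softmax over coarse frames
    return weights @ v                             # (T_fine, d)

wav = rng.standard_normal(16000)                   # stand-in: 1 s of 16 kHz audio

# Two complementary time-frequency resolutions of the same signal.
fine = stft_mag(wav, n_fft=512, hop=256)           # finer frequency resolution
coarse = stft_mag(wav, n_fft=128, hop=64)          # finer time resolution

# Random projections stand in for learned per-resolution encoders (d = 64).
W_f = rng.standard_normal((fine.shape[1], 64)) * 0.1
W_c = rng.standard_normal((coarse.shape[1], 64)) * 0.1
f, c = fine @ W_f, coarse @ W_c

# Cross-scale attention: fine-resolution frames attend to coarse-resolution frames.
aligned = cross_scale_attention(f, c, c)

# Consistency term: the two scales' pooled embeddings should agree
# (1 - cosine similarity, so it lies in [0, 2] and is 0 at perfect agreement).
pf, pc = aligned.mean(axis=0), c.mean(axis=0)
consistency = 1.0 - pf @ pc / (np.linalg.norm(pf) * np.linalg.norm(pc))
print(f"consistency loss: {consistency:.4f}")
```

In training, such a consistency term would be added to the classification loss so that the encoders are pushed toward resolution-consistent spectral cues rather than artifacts visible at only one scale.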