Current text-to-speech algorithms produce realistic fakes of human voices, making deepfake detection a much-needed area of research. While researchers have presented various techniques for detecting audio spoofs, it is often unclear exactly why these architectures are successful: preprocessing steps, hyperparameter settings, and the degree of fine-tuning are not consistent across related work. Which factors contribute to success, and which are accidental? In this work, we address this problem: we systematize audio spoofing detection by re-implementing and uniformly evaluating architectures from related work. We identify overarching features for successful audio deepfake detection, such as using cqtspec or logspec features instead of melspec features, which improves performance by 37% EER on average, all other factors held constant. Additionally, we evaluate generalization capabilities: we collect and publish a new dataset consisting of 37.9 hours of found audio recordings of celebrities and politicians, of which 17.2 hours are deepfakes. We find that related work performs poorly on such real-world data (performance degradation of up to 1,000%). This may suggest that the community has tailored its solutions too closely to the prevailing ASVSpoof benchmark and that deepfakes are much harder to detect outside the lab than previously thought.
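The equal error rate (EER) used throughout this abstract is the operating point at which the false-acceptance rate (spoofs classified as bona fide) equals the false-rejection rate (bona fide audio classified as spoof). A minimal sketch of how it can be computed from detector scores follows; this is an illustrative implementation assuming higher scores mean "more likely bona fide", not the paper's evaluation code.

```python
import numpy as np

def compute_eer(bonafide_scores, spoof_scores):
    """Equal Error Rate: the threshold sweep point where the
    false-rejection rate (FRR) meets the false-acceptance rate (FAR).
    Assumes higher scores indicate bona fide audio."""
    scores = np.concatenate([bonafide_scores, spoof_scores])
    labels = np.concatenate([np.ones_like(bonafide_scores),
                             np.zeros_like(spoof_scores)])
    order = np.argsort(scores)          # sweep thresholds in ascending order
    labels = labels[order]
    n_bona = labels.sum()
    n_spoof = len(labels) - n_bona
    # At threshold i, everything up to i is rejected (classified as spoof).
    frr = np.cumsum(labels) / n_bona            # bona fide wrongly rejected
    far = 1.0 - np.cumsum(1 - labels) / n_spoof # spoofs still accepted
    idx = np.nanargmin(np.abs(frr - far))
    return (frr[idx] + far[idx]) / 2

# Perfectly separable scores yield an EER of 0; full overlap approaches 0.5.
eer = compute_eer(np.array([0.9, 0.8]), np.array([0.1, 0.2]))
```

Under this convention, a "37% EER improvement" means the EER of a detector using cqtspec or logspec features is, on average, 37% lower relative to the same detector using melspec features.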