Face presentation attacks (FPA), also known as face spoofing, have raised increasing public concern through various malicious applications, such as financial fraud and privacy leakage. Therefore, safeguarding face recognition systems against FPA is of utmost importance. Although existing learning-based face anti-spoofing (FAS) models can achieve outstanding detection performance, they lack generalization capability and suffer significant performance drops in unforeseen environments. Many methodologies seek to use auxiliary modality data (e.g., depth and infrared maps) during presentation attack detection (PAD) to address this limitation. However, these methods can be limited since (1) they require specific sensors such as depth and infrared cameras for data capture, which are rarely available on commodity mobile devices, and (2) they cannot work properly in practical scenarios when either modality is missing or of poor quality. In this paper, we devise an accurate and robust MultiModal Mobile Face Anti-Spoofing system named M3FAS to overcome the issues above. The primary innovation of this work lies in the following aspects: (1) To achieve robust PAD, our system combines visual and auditory modalities using three commonly available sensors: camera, speaker, and microphone; (2) We design a novel two-branch neural network with three hierarchical feature aggregation modules to perform cross-modal feature fusion; (3) We propose a multi-head training strategy, allowing the model to output predictions from the vision, acoustic, and fusion heads, resulting in more flexible PAD. Extensive experiments have demonstrated the accuracy, robustness, and flexibility of M3FAS under various challenging experimental settings. The source code and dataset are available at: https://github.com/ChenqiKONG/M3FAS/
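The two-branch, multi-head design described above can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the layer sizes, the simple tanh branches, and the plain concatenation fusion are all assumptions standing in for the actual hierarchical feature aggregation modules; only the overall topology (separate vision and acoustic branches, each with its own prediction head, plus a fusion head) follows the abstract.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class TwoBranchPAD:
    """Illustrative two-branch detector with vision, acoustic, and fusion heads.
    Dimensions and the naive concatenation fusion are hypothetical; M3FAS's
    hierarchical feature aggregation modules are not reproduced here."""

    def __init__(self, d_vis=16, d_ac=12, d_hid=8, seed=0):
        rng = np.random.default_rng(seed)
        # Randomly initialized weights stand in for trained parameters.
        self.Wv = rng.standard_normal((d_vis, d_hid)) * 0.1  # vision branch
        self.Wa = rng.standard_normal((d_ac, d_hid)) * 0.1   # acoustic branch
        self.hv = rng.standard_normal((d_hid, 1)) * 0.1      # vision head
        self.ha = rng.standard_normal((d_hid, 1)) * 0.1      # acoustic head
        self.hf = rng.standard_normal((2 * d_hid, 1)) * 0.1  # fusion head

    def forward(self, x_vis, x_ac):
        fv = np.tanh(x_vis @ self.Wv)            # vision-branch features
        fa = np.tanh(x_ac @ self.Wa)             # acoustic-branch features
        ff = np.concatenate([fv, fa], axis=-1)   # cross-modal fusion (naive)
        # Each head produces its own liveness score in (0, 1).
        return {
            "vision": sigmoid(fv @ self.hv),
            "acoustic": sigmoid(fa @ self.ha),
            "fusion": sigmoid(ff @ self.hf),
        }

model = TwoBranchPAD()
scores = model.forward(np.ones((1, 16)), np.ones((1, 12)))
```

Because every head is trained to predict liveness on its own, inference can fall back to the vision head alone when the acoustic capture is missing or of poor quality, which is the flexibility the multi-head strategy is meant to provide.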