Face recognition technologies are increasingly used in various applications, yet they are vulnerable to face spoofing attacks. Such spoofing attacks often involve distinctive 3D structures, such as printed papers or mobile device screens. Although stereo-depth cameras can detect these attacks effectively, their high cost limits their widespread adoption. Conversely, two-sensor systems without extrinsic calibration offer a cost-effective alternative but cannot compute depth using stereo techniques. In this work, we propose a method to overcome this challenge by leveraging facial attributes to derive disparity information and estimate relative depth for anti-spoofing purposes in non-calibrated systems. We introduce a multi-modal anti-spoofing model, coined the Disparity Model, that incorporates the generated disparity maps as a third modality alongside the two original sensor modalities. We demonstrate the effectiveness of the Disparity Model in countering various spoof attacks using a comprehensive dataset collected from the Intel RealSense ID Solution F455. Our method outperforms existing methods in the literature, achieving an Equal Error Rate (EER) of 1.71% and a False Negative Rate (FNR) of 2.77% at a False Positive Rate (FPR) of 1%. These errors are lower by 2.45% and 7.94%, respectively, than those of the best comparison method. Additionally, we introduce a model ensemble that also addresses 3D spoof attacks, achieving an EER of 2.04% and an FNR of 3.83% at an FPR of 1%. Overall, our work provides a state-of-the-art solution for the challenging task of anti-spoofing in non-calibrated systems that lack depth information.
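The EER and FNR-at-fixed-FPR figures reported above can be computed from raw liveness scores. The following is a minimal sketch, not the paper's evaluation code: it assumes score arrays where a higher score means more likely bona fide, and all function and variable names are illustrative.

```python
import numpy as np

def eer_and_fnr_at_fpr(spoof_scores, live_scores, target_fpr=0.01):
    """Compute EER and FNR at a fixed FPR from raw liveness scores.

    spoof_scores: scores of attack samples (negatives).
    live_scores:  scores of bona fide samples (positives).
    Higher score = more likely live. Names are illustrative only.
    """
    # Sweep every observed score as a decision threshold.
    thresholds = np.sort(np.concatenate([spoof_scores, live_scores]))
    # FPR: fraction of spoofs accepted; FNR: fraction of live faces rejected.
    fprs = np.array([(spoof_scores >= t).mean() for t in thresholds])
    fnrs = np.array([(live_scores < t).mean() for t in thresholds])
    # EER: operating point where FPR and FNR are closest to equal.
    i = np.argmin(np.abs(fprs - fnrs))
    eer = (fprs[i] + fnrs[i]) / 2.0
    # FNR at the loosest threshold that still keeps FPR <= target.
    ok = fprs <= target_fpr
    fnr_at_target = fnrs[ok].min() if ok.any() else 1.0
    return eer, fnr_at_target
```

Well-separated score distributions yield a low EER and a low FNR at FPR = 1%; the 1.71% / 2.77% figures above correspond to this kind of evaluation on the F455 dataset.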