Presentation attacks are a critical security threat in which adversaries present fake biometric data, such as face, fingerprint, or iris images, to gain unauthorized access to protected systems. Various presentation attack detection (PAD) systems based on deep learning (DL) models have been designed to mitigate this threat. Despite their effectiveness, most DL models function as black boxes: their decisions are opaque to their users. Explainability techniques aim to provide detailed information about the reasons behind the behavior or decisions of DL models. In particular, visual explanation is needed to better understand the decisions or predictions of DL-based PAD systems and to identify the key image regions that lead the system to classify a biometric image as real or fake. In this work, a novel technique, Ensemble-CAM, is proposed to provide visual explanations for the decisions made by deep learning-based face PAD systems. Our goal is to improve DL-based face PAD systems by providing a better understanding of their behavior. The resulting visual explanations will enhance the transparency and trustworthiness of DL-based face PAD systems.
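To make the idea of ensembled class-activation-map (CAM) explanations concrete, the sketch below averages Grad-CAM-style heatmaps from several backbones into a single map. This is a minimal illustration only, not the paper's Ensemble-CAM algorithm; the choice of ResNet backbones, the target conv layer, and the class index for "real vs. attack" are assumptions made purely for demonstration.

```python
# Hypothetical sketch: average Grad-CAM heatmaps from several PAD backbones.
# NOT the paper's Ensemble-CAM method; backbones, layers, and class index
# are illustrative assumptions.
import torch
import torch.nn.functional as F
from torchvision import models

def grad_cam(model, layer, image, class_idx):
    """Compute a Grad-CAM heatmap for one model at a chosen conv layer."""
    acts, grads = [], []

    def fwd_hook(_, __, output):
        acts.append(output)
        output.register_hook(lambda g: grads.append(g))  # capture gradient

    handle = layer.register_forward_hook(fwd_hook)
    logits = model(image)                 # image: (1, 3, H, W)
    model.zero_grad()
    logits[0, class_idx].backward()       # backprop the target class score
    handle.remove()

    a, g = acts[0], grads[0]              # both (1, C, h, w)
    weights = g.mean(dim=(2, 3), keepdim=True)     # channel importance
    cam = F.relu((weights * a).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear",
                        align_corners=False).squeeze().detach()
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)

def ensemble_cam(members, image, class_idx=0):
    """Average per-model CAMs into one explanation (illustrative only)."""
    maps = [grad_cam(m, l, image, class_idx) for m, l in members]
    return torch.stack(maps).mean(dim=0)

if __name__ == "__main__":
    # Two untrained backbones stand in for trained face PAD classifiers.
    m1, m2 = models.resnet18(weights=None), models.resnet34(weights=None)
    m1.eval(); m2.eval()
    members = [(m1, m1.layer4), (m2, m2.layer4)]
    face = torch.rand(1, 3, 224, 224)     # placeholder face crop
    heatmap = ensemble_cam(members, face, class_idx=0)
    print(heatmap.shape)                  # torch.Size([224, 224])
```

In this toy setup, each member produces its own saliency map and the maps are normalized before averaging, so no single backbone dominates the combined explanation; how the actual Ensemble-CAM technique weights or fuses its member maps is described later in the paper.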