Face Recognition (FR) has advanced significantly with the development of deep learning, achieving high accuracy in several applications. However, the lack of interpretability of these systems raises concerns about their accountability, fairness, and reliability. In the present study, we propose an interactive framework to enhance the explainability of FR models by combining model-agnostic Explainable Artificial Intelligence (XAI) and Natural Language Processing (NLP) techniques. The proposed framework can accurately answer diverse user questions through an interactive chatbot. In particular, the explanations generated by our method take the form of natural language text and visual representations, which can, for example, describe how different facial regions contribute to the similarity measure between two faces. This is achieved through the automatic analysis of the saliency heatmaps produced for the face images, combined with a BERT question-answering model, providing users with an interface that facilitates a comprehensive understanding of FR decisions. The proposed approach is interactive, allowing users to ask follow-up questions and obtain more precise information tailored to their background knowledge. More importantly, in contrast to previous studies, our solution does not degrade face recognition performance. We demonstrate the effectiveness of the method through several experiments, highlighting its potential to make FR systems more interpretable and user-friendly, especially in sensitive applications where decision-making transparency is crucial.
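To illustrate the kind of analysis the abstract describes, the following is a minimal sketch of how a saliency heatmap could be aggregated into per-region contribution scores and turned into a natural-language explanation. The region boxes, function names, and the synthetic heatmap are all hypothetical illustrations, not the paper's actual implementation; a real system would derive regions from facial landmarks and heatmaps from a model-agnostic XAI method.

```python
import numpy as np

# Hypothetical facial-region boxes (row_start, row_end, col_start, col_end)
# on a 112x112 aligned face crop; a real system would derive these from
# detected facial landmarks rather than hard-coded coordinates.
REGIONS = {
    "eyes":  (30, 55, 15, 97),
    "nose":  (50, 80, 40, 72),
    "mouth": (75, 100, 30, 82),
}

def region_contributions(saliency):
    """Aggregate a saliency heatmap into per-region contribution fractions,
    sorted from most to least salient region."""
    total = saliency.sum() + 1e-12  # avoid division by zero
    scores = {
        name: float(saliency[r0:r1, c0:c1].sum() / total)
        for name, (r0, r1, c0, c1) in REGIONS.items()
    }
    return dict(sorted(scores.items(), key=lambda kv: -kv[1]))

def describe(scores):
    """Turn region scores into a short natural-language explanation,
    the kind of text a chatbot front-end could surface to the user."""
    top = next(iter(scores))
    parts = ", ".join(f"{name} ({s:.0%})" for name, s in scores.items())
    return (f"The {top} region contributed most to the similarity score; "
            f"region contributions: {parts}.")

# Synthetic stand-in for a saliency map from an XAI attribution method.
rng = np.random.default_rng(0)
heatmap = rng.random((112, 112))
heatmap[30:55, 15:97] += 2.0  # make the eye region dominate for the demo
print(describe(region_contributions(heatmap)))
```

In a full pipeline, text like this would be passed as context to a BERT question-answering model so that the chatbot can answer free-form user questions about the decision.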