In computer vision, explainable AI (xAI) methods seek to mitigate the 'black-box' problem by making the decision-making process of deep learning models more interpretable and transparent. Traditional xAI methods concentrate on visualizing the input features that influence model predictions, providing insights primarily suited to experts. In this work, we present an interaction-based xAI method that enhances user comprehension of image classification models through direct interaction. To this end, we developed a web-based prototype that allows users to modify images via painting and erasing and to observe the resulting changes in classification output. Our approach enables users to discern the critical features influencing the model's decision-making process, aligning their mental models with the model's logic. Experiments conducted with five images demonstrate the potential of the method to reveal feature importance through user interaction. Our work contributes a novel perspective to xAI by centering on end-user engagement and understanding, paving the way for more intuitive and accessible explainability in AI systems.
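The edit-and-reclassify loop described above can be sketched minimally. The snippet below is an illustration only, not the authors' prototype: `classify_fn` is a hypothetical stand-in for any classifier mapping an HxWx3 image to class probabilities (here a toy detector of brightness in the image centre), and `erase_region` mimics the erase tool by overwriting a patch.

```python
import numpy as np

def classify_fn(image):
    """Hypothetical stand-in classifier: scores the 'object' class by the
    mean brightness of the central region (illustration only)."""
    h, w, _ = image.shape
    centre = image[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4].mean()
    p_object = centre / 255.0
    return np.array([p_object, 1.0 - p_object])  # [object, background]

def erase_region(image, top, left, height, width, fill=0):
    """Simulate the prototype's erase tool: overwrite a patch with a fill value."""
    edited = image.copy()
    edited[top : top + height, left : left + width] = fill
    return edited

# A bright square in the centre acts as the candidate 'critical feature'.
image = np.zeros((64, 64, 3), dtype=np.uint8)
image[16:48, 16:48] = 255

before = classify_fn(image)
after = classify_fn(erase_region(image, 16, 16, 32, 32))

# If the erased region carried the decisive evidence, the object-class
# probability drops, revealing its importance to the user.
print(before[0] > after[0])  # → True
```

In the actual prototype this comparison would run against a real model after each paint or erase stroke, letting the user probe feature importance interactively rather than through a precomputed saliency map.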