Deep learning-based methods have achieved a breakthrough in image anomaly detection, but their complexity introduces a considerable challenge to understanding why an instance is predicted to be anomalous. We introduce a novel explanation method that generates multiple alternative modifications for each anomaly, capturing diverse concepts of anomalousness. Each modification is trained to be perceived as normal by the anomaly detector. The method provides a semantic explanation of the mechanism that triggered the detector, allowing users to explore ``what-if scenarios.'' Qualitative and quantitative analyses across various image datasets demonstrate that applying this method to state-of-the-art detectors provides high-quality semantic explanations.