Detecting offensive memes is crucial, yet standard deep neural network systems often remain opaque. Various input attribution-based methods attempt to interpret their behavior, but they struggle with implicitly offensive memes and yield attributions that are not causal. To address these issues, we propose a framework based on a Structural Causal Model (SCM), in which VisualBERT is trained to predict the class of an input meme from both the meme input and causal concepts, enabling transparent interpretation. Our qualitative evaluation demonstrates the framework's effectiveness in understanding model behavior, particularly in determining whether the model was right for the right reasons and in identifying the reasons behind misclassifications. Additionally, our quantitative analysis assesses the significance of the proposed modelling choices, such as de-confounding, adversarial learning, and dynamic routing, and compares them with input attribution methods. Surprisingly, we find that input attribution methods do not guarantee causality within our framework, raising questions about their reliability in safety-critical applications. The project page is at: https://newcodevelop.github.io/causality_adventure/
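To make the core idea concrete, below is a minimal PyTorch sketch of a classifier that predicts the meme label from both fused image-text features and an explicit vector of concept activations, so the concept pathway can be inspected. All module names, dimensions, and the concept head are illustrative assumptions for this sketch, not the authors' implementation, which additionally involves de-confounding, adversarial learning, and dynamic routing.

```python
import torch
import torch.nn as nn

class ConceptMediatedClassifier(nn.Module):
    """Sketch: label prediction conditioned on meme features and concepts."""

    def __init__(self, feat_dim=768, num_concepts=10, num_classes=2):
        super().__init__()
        # Maps fused meme features to interpretable concept scores
        # (hypothetical mediators such as "attacks a protected group").
        self.concept_head = nn.Linear(feat_dim, num_concepts)
        # The label is predicted from the concepts together with the raw
        # features, so each concept's contribution to the decision is exposed.
        self.classifier = nn.Linear(feat_dim + num_concepts, num_classes)

    def forward(self, fused_features):
        concepts = torch.sigmoid(self.concept_head(fused_features))
        logits = self.classifier(torch.cat([fused_features, concepts], dim=-1))
        return logits, concepts  # concepts enable transparent inspection

# Usage: fused_features would come from a multimodal encoder such as
# VisualBERT's pooled output (hidden size 768 for the base model).
feats = torch.randn(4, 768)           # batch of 4 memes (dummy features)
model = ConceptMediatedClassifier()
logits, concepts = model(feats)
print(logits.shape, concepts.shape)   # torch.Size([4, 2]) torch.Size([4, 10])
```

Exposing the concept vector as an explicit intermediate output is what allows asking whether a correct prediction was made for the right reason, rather than inferring this post hoc from input attributions.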