With the rapid advancement of neural language models, the deployment of over-parameterized models has surged, increasing the need for interpretable explanations comprehensible to human inspectors. Existing post-hoc interpretability methods, which often focus on unigram features of single input textual instances, fail to fully capture the models' decision-making process. Additionally, many methods do not differentiate between decisions based on spurious correlations and those based on a holistic understanding of the input. Our paper introduces DISCO, a novel method for discovering global, rule-based explanations by identifying causal n-gram associations with model predictions. DISCO employs a scalable sequence mining technique to extract relevant text spans from the training data, associates them with model predictions, and applies causality checks to distill robust rules that elucidate model behavior. These rules expose potential overfitting and provide insights into misleading feature combinations. We validate DISCO through extensive testing, demonstrating its superiority over existing methods in offering comprehensive insights into complex model behaviors. Our approach identifies all shortcuts manually introduced into the training data (a 100% detection rate on the MultiRC dataset), which cause an 18.8% regression in model performance -- a capability unmatched by any other method. Furthermore, DISCO supports interactive explanations, enabling human inspectors to distinguish spurious causes in the rule-based output. This alleviates the burden of sifting through abundant instance-wise explanations and helps assess the model's risk when it encounters out-of-distribution (OOD) data.
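The pipeline described above (mine n-gram spans, associate them with model predictions, then run a causality check) can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's actual algorithm: the mining, association, and causal-check steps here are deliberately simplistic (removal-based ablation as the causality check, a keyword-triggered toy classifier standing in for a trained model), and all names and thresholds are assumptions.

```python
from collections import Counter
from itertools import islice

def ngrams(tokens, n):
    """Yield all contiguous n-grams from a token list."""
    return zip(*(islice(tokens, i, None) for i in range(n)))

def mine_rules(texts, predict, n=2, min_support=2):
    """Toy DISCO-style rule mining (illustrative only).

    (1) Mine n-grams that occur in at least `min_support` texts.
    (2) Associate each frequent n-gram with the model's prediction.
    (3) Causal check: keep only spans whose removal flips the
        prediction (a crude token-ablation test).
    Returns a Counter mapping (ngram, label) rules to support counts.
    """
    counts = Counter()
    for text in texts:
        for g in set(ngrams(text.split(), n)):
            counts[g] += 1
    frequent = {g for g, c in counts.items() if c >= min_support}

    rules = Counter()
    for text in texts:
        tokens = text.split()
        label = predict(text)
        for g in set(ngrams(tokens, n)) & frequent:
            # Ablate the span's tokens and re-query the model;
            # a flipped prediction suggests a causal association.
            reduced = " ".join(t for t in tokens if t not in g)
            if predict(reduced) != label:
                rules[(g, label)] += 1
    return rules

# Toy "model" with a planted shortcut: it predicts 1 whenever
# the token "free" appears, regardless of the rest of the input.
predict = lambda text: int("free" in text.split())

texts = [
    "win a free prize now",
    "claim your free prize today",
    "meeting notes attached below",
]
rules = mine_rules(texts, predict, n=2, min_support=2)
# The shortcut surfaces as a global rule: ("free", "prize") -> label 1.
```

A real implementation would replace the frequency threshold with a scalable sequence miner and the ablation test with a more careful causal intervention, but the rule-based output has the same shape: human-readable (span, prediction) pairs that an inspector can vet for spurious shortcuts.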