While deep learning has led to huge progress in complex image classification tasks like ImageNet, unexpected failure modes, e.g. via spurious features, call into question how reliably these classifiers work in the wild. Furthermore, for safety-critical tasks the black-box nature of their decisions is problematic, and explanations, or at least methods that make decisions plausible, are urgently needed. In this paper, we address these problems by generating images that optimize a classifier-derived objective using a framework for guided image generation. We analyze the decisions of image classifiers via visual counterfactual explanations (VCEs), detect systematic mistakes by analyzing images on which classifiers maximally disagree, and visualize neurons and spurious features. In this way, we validate existing observations, e.g. the shape bias of adversarially robust models, and uncover novel failure modes, e.g. systematic errors of zero-shot CLIP classifiers. Moreover, our VCEs outperform previous work while being more versatile.
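To make the central idea concrete, the following is a minimal sketch of optimizing a classifier-derived objective through a guided image generation framework. It assumes a differentiable generator that decodes a latent code into an image and a classifier that returns logits; the names `generator`, `classifier`, and `maximize_class` are illustrative placeholders, not the paper's actual API.

```python
import torch
import torch.nn.functional as F

def maximize_class(generator, classifier, z_init, target_class, steps=200, lr=0.05):
    """Illustrative sketch: optimize a latent code so the generated image
    maximizes the classifier's confidence in `target_class`.

    Assumes `generator` (latent -> image) and `classifier` (image -> logits)
    are differentiable torch modules; this is not the paper's implementation.
    """
    z = z_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        img = generator(z)                      # decode latent into an image
        logits = classifier(img)                # classifier-derived objective
        loss = -F.log_softmax(logits, dim=-1)[:, target_class].mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return generator(z).detach()
```

The same objective can be swapped, e.g. for the logit difference between two classifiers to find images on which they maximally disagree, or for a single neuron's activation to visualize it.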