The proliferation of machine learning models in critical decision-making processes has underscored the need for bias discovery and mitigation strategies. Identifying the reasons behind a biased system is not straightforward, since they are often associated with hidden spurious correlations that are not easy to spot. Standard approaches rely on bias audits performed by analyzing model performance on pre-defined subgroups of data samples, usually characterized by common attributes such as gender or ethnicity when the data depicts people, or by other attributes defining semantically coherent groups of images. However, it is not always possible to know a priori the specific attributes defining the failure modes of visual recognition systems. Recent approaches propose to discover these groups by leveraging large vision-language models, which enable the extraction of cross-modal embeddings and the generation of textual descriptions to characterize the subgroups where a given model underperforms. In this work, we argue that incorporating visual explanations (e.g., heatmaps generated via GradCAM or other approaches) can boost the performance of such bias discovery and mitigation frameworks. To this end, we introduce Visually Grounded Bias Discovery and Mitigation (ViG-Bias), a simple yet effective technique that can be integrated into a variety of existing frameworks to improve both discovery and mitigation performance. Our comprehensive evaluation shows that incorporating visual explanations enhances existing techniques like DOMINO, FACTS, and Bias-to-Text across several challenging datasets, including CelebA, Waterbirds, and NICO++.
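To illustrate the visual-grounding idea at a high level, the sketch below masks an image with a (precomputed) GradCAM-style heatmap so that only the regions the model attended to would be passed on to a cross-modal encoder. This is a minimal, hypothetical illustration using NumPy, not the authors' exact implementation; the heatmap, the threshold value, and the downstream embedding step are all assumptions for the sake of the example.

```python
import numpy as np

def mask_with_heatmap(image, heatmap, threshold=0.5):
    """Keep only the image regions highlighted by a saliency heatmap.

    image:   (H, W, C) float array
    heatmap: (H, W) GradCAM-style activation map (arbitrary range)

    Returns the image with non-salient pixels zeroed out; in a
    ViG-Bias-like pipeline, this masked image (rather than the full
    image) would be fed to the cross-modal embedding model.
    """
    # Normalize the heatmap to [0, 1]
    h = heatmap - heatmap.min()
    h = h / (h.max() + 1e-8)
    # Binarize: 1 where the model attended, 0 elsewhere
    mask = (h >= threshold).astype(image.dtype)
    # Broadcast the mask over the channel dimension
    return image * mask[..., None]

# Toy example: 4x4 white image, heatmap salient in the top-left quadrant
img = np.ones((4, 4, 3))
hm = np.zeros((4, 4))
hm[:2, :2] = 1.0
masked = mask_with_heatmap(img, hm)
```

In this toy run, the top-left quadrant of `masked` is preserved while the rest is zeroed, mimicking how grounding embeddings on salient regions can suppress spurious background context.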