This paper proposes a novel framework for reinforcing classification models with language-guided generated counterfactual images. Deep learning classifiers are typically trained on datasets that mirror real-world scenarios. Because learning in this process is driven solely by correlations with labels, models risk learning spurious relationships, such as an overreliance on features peripheral to the subject, like background elements in images. Moreover, the black-box nature of deep learning models' decision-making makes these vulnerabilities particularly difficult to identify and address. Our framework consists of a two-stage process. First, we identify model weaknesses by testing the model on a counterfactual image dataset generated from perturbed image captions. Second, we employ the counterfactual images as an augmented dataset to fine-tune and reinforce the classification model. Extensive experiments on several classification models across various datasets show that fine-tuning with a small set of counterfactual images effectively strengthens the model.
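The two-stage process above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: `perturb_caption`, `generate_image`, and `classify` are hypothetical stand-ins for the caption-perturbation step, the text-to-image generator, and the classifier under test.

```python
# Minimal sketch of the two-stage reinforcement loop.
# All function names here are hypothetical placeholders, not the paper's API.

def perturb_caption(caption: str) -> str:
    """Perturb a non-subject attribute in the caption (here: the background)."""
    return caption.replace("on grass", "on snow")

def generate_image(caption: str) -> dict:
    """Stand-in for a text-to-image model; returns a mock image record."""
    return {"caption": caption}

def classify(image: dict) -> str:
    """Toy classifier that spuriously relies on the background feature."""
    return "cow" if "on grass" in image["caption"] else "unknown"

# Stage 1: probe the model with caption-perturbed counterfactual images.
originals = [{"caption": "a cow on grass", "label": "cow"}]
counterfactuals = [
    {"image": generate_image(perturb_caption(o["caption"])), "label": o["label"]}
    for o in originals
]
errors = [c for c in counterfactuals if classify(c["image"]) != c["label"]]
weakness_rate = len(errors) / len(counterfactuals)

# Stage 2: misclassified counterfactuals become augmentation data for
# fine-tuning (the fine-tuning step itself is elided in this sketch).
augmented_training_set = originals + [
    {"caption": c["image"]["caption"], "label": c["label"]} for c in errors
]
print(weakness_rate)                # 1.0 for this toy classifier
print(len(augmented_training_set))  # 2
```

The toy classifier fails on every background-swapped image (weakness rate 1.0), which is exactly the kind of spurious background reliance the counterfactual probe is designed to expose before the augmented fine-tuning stage corrects it.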