The effectiveness of deep learning models in classification tasks is often challenged by the quality and quantity of training data, particularly when strong spurious correlations exist between specific attributes and target labels. The resulting bias in the training data typically leads to unrecoverably weak generalization at prediction time. This paper addresses this problem by leveraging bias amplification with generated synthetic data: we introduce Diffusing DeBias (DDB), a novel approach that acts as a plug-in for common unsupervised model-debiasing methods, exploiting the inherent bias-learning tendency of diffusion models in data generation. Specifically, our approach employs conditional diffusion models to generate synthetic bias-aligned images, which replace the original training set for learning an effective bias-amplifier model that we subsequently incorporate into both an end-to-end and a two-step unsupervised debiasing approach. By tackling the fundamental issue of memorization of bias-conflicting training samples when learning auxiliary models, typical of such techniques, our proposed method outperforms the current state of the art on multiple benchmark datasets, demonstrating its potential as a versatile and effective tool for tackling bias in deep learning models. Code is available at https://github.com/Malga-Vision/DiffusingDeBias
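As a conceptual illustration of how a bias amplifier can plug into a debiasing pipeline, the sketch below shows one common way its predictions on the real training set can be turned into per-sample weights that emphasize bias-conflicting samples. This is a minimal, hypothetical sketch of the general idea (the function name and the loss-based weighting scheme are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

# Hypothetical sketch of the DDB plug-in idea (names are illustrative):
# 1) a conditional diffusion model, trained on the biased dataset, generates
#    synthetic bias-aligned images per class (diffusion models tend to
#    reproduce the dominant spurious correlations);
# 2) a "bias amplifier" classifier is trained on those synthetic images only,
#    so it never memorizes the rare bias-conflicting real samples;
# 3) its per-sample losses on the REAL training set then drive a downstream
#    debiasing method, e.g. by upweighting samples it misclassifies.

def conflict_weights(amplifier_probs, labels, eps=1e-8):
    """Upweight samples the bias amplifier gets wrong (likely bias-conflicting).

    amplifier_probs: (N, C) class probabilities predicted by the bias amplifier
    labels: (N,) ground-truth class indices
    Returns per-sample weights proportional to the amplifier's cross-entropy loss.
    """
    p_true = amplifier_probs[np.arange(len(labels)), labels]
    losses = -np.log(p_true + eps)        # high loss -> conflicts with the bias
    return losses / (losses.sum() + eps)  # normalize to a sampling distribution

# Toy example: the amplifier is confident on sample 0 (bias-aligned)
# and confidently wrong on sample 1 (bias-conflicting).
probs = np.array([[0.95, 0.05],
                  [0.10, 0.90]])
labels = np.array([0, 0])
w = conflict_weights(probs, labels)
print(w)  # sample 1 receives far more weight than sample 0
```

Under this toy weighting, the bias-conflicting sample dominates the resampling distribution, which is the effect an amplifier trained purely on bias-aligned synthetic data is meant to enable.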