With the rapid advancement of deep learning, state-of-the-art algorithms have been deployed in a wide range of social applications. However, some of these algorithms have been found to exhibit biases and produce unequal outcomes. Existing debiasing methods face challenges such as poor data utilization or complex training requirements. In this work, we find that a backdoor attack can construct an artificial bias that resembles the model bias arising in standard training. Given the strong adjustability of backdoor triggers, we are motivated to mitigate model bias by carefully designing a reverse artificial bias created through a backdoor attack. Building on this observation, we propose a backdoor debiasing framework based on knowledge distillation, which effectively reduces the model bias learned from the original data while minimizing the security risks introduced by the backdoor attack. The proposed approach is validated on both image and structured datasets, showing promising results. This work advances the understanding of backdoor attacks and highlights their potential for beneficial applications. The code is available at \url{https://anonymous.4open.science/r/DwB-BC07/}.