Numerous studies have revealed that deep learning-based medical image classification models may exhibit bias towards specific demographic attributes, such as race, gender, and age. Existing bias mitigation methods often achieve a high level of fairness at the cost of significant accuracy degradation. In response to this challenge, we propose an innovative and adaptable channel pruning framework based on the Soft Nearest Neighbor Loss that achieves fairness through pruning. Traditionally, channel pruning is utilized to accelerate neural network inference; our work demonstrates that pruning can also be a potent tool for achieving fairness. Our key insight is that different channels in a layer contribute differently to the accuracy of different demographic groups. By selectively pruning the critical channels that drive the accuracy gap between privileged and unprivileged groups, we can effectively improve fairness without significantly sacrificing accuracy. Experiments on two skin lesion diagnosis datasets across multiple sensitive attributes validate the effectiveness of our method in achieving a state-of-the-art trade-off between accuracy and fairness. Our code is available at https://github.com/Kqp1227/Sensitive-Channel-Pruning.
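To make the key insight concrete, the sketch below illustrates one plausible way Soft Nearest Neighbor Loss (SNNL) could be used to rank channels for fairness-aware pruning. This is a minimal NumPy illustration, not the paper's implementation: the function names (`snn_loss`, `rank_channels_for_pruning`), the per-channel scoring scheme, and the synthetic data are all assumptions for exposition. The intuition is that SNNL computed over the sensitive attribute is low when a channel's activations separate the demographic groups, so the lowest-scoring channels are the "sensitive" ones that would be pruned first.

```python
import numpy as np

def snn_loss(feats, labels, temperature=1.0, eps=1e-12):
    """Soft Nearest Neighbor Loss (Frosst et al., 2019).

    Low values: points sharing a label cluster together (groups separable).
    High values: groups are entangled in this feature space.
    """
    # Pairwise squared Euclidean distances, shape (n, n).
    d = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
    sim = np.exp(-d / temperature)
    np.fill_diagonal(sim, 0.0)           # exclude self-pairs
    same = (labels[:, None] == labels[None, :]) & ~np.eye(len(labels), dtype=bool)
    num = (sim * same).sum(axis=1)       # similarity mass to same-label points
    den = sim.sum(axis=1)                # similarity mass to all other points
    return -np.log(num / (den + eps) + eps).mean()

def rank_channels_for_pruning(activations, sensitive, temperature=1.0):
    """Hypothetical pruning order based on per-channel SNNL.

    activations: (batch, channels) pooled per-channel activations.
    sensitive:   (batch,) sensitive-attribute labels (e.g. demographic group).
    Returns channel indices in ascending SNNL order: channels that most
    strongly separate the sensitive groups come first (prune candidates).
    """
    scores = np.array([
        snn_loss(activations[:, c:c + 1], sensitive, temperature)
        for c in range(activations.shape[1])
    ])
    return np.argsort(scores)
```

As a usage sketch, if channel 0 of a layer correlates with the sensitive attribute while channel 1 carries group-independent signal, `rank_channels_for_pruning` would place channel 0 first in the pruning order; after pruning, the network would be fine-tuned to recover task accuracy.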