Audio-based depression detection models have demonstrated promising performance but often suffer from gender bias due to imbalanced training data. Epidemiological statistics show a higher prevalence of depression in females, leading models to learn spurious correlations between gender and depression. Consequently, models tend to over-diagnose female patients while underperforming on male patients, raising significant fairness concerns. To address this, we propose a novel Counterfactual Debiasing Framework grounded in causal inference. We construct a causal graph to model the decision-making process and identify gender bias as the direct causal effect of gender on the prediction. During inference, we employ counterfactual inference to estimate and subtract this direct effect, ensuring the model relies primarily on authentic acoustic pathological features. Extensive experiments on the DAIC-WOZ dataset using two advanced acoustic backbones demonstrate that our framework not only significantly reduces gender bias but also improves overall detection performance compared to existing debiasing strategies.
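The inference-time debiasing step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the model `f`, the reference (counterfactual) acoustic input, and the scaling factor `alpha` are all assumptions introduced for the example. The idea is to compute the factual prediction (audio plus gender), estimate the gender-only direct effect via a counterfactual input with the acoustic evidence removed, and subtract that direct effect from the prediction.

```python
import numpy as np

def debiased_logit(f, audio_feat, gender, ref_feat, alpha=1.0):
    """Counterfactual debiasing at inference (illustrative sketch).

    f          : hypothetical trained scorer f(audio_features, gender) -> logit
    audio_feat : acoustic features of the test sample
    gender     : gender indicator (0 or 1)
    ref_feat   : counterfactual acoustic input with pathology information
                 removed (here approximated by a zero/mean vector -- an
                 assumption, not specified by the abstract)
    alpha      : strength of the subtracted direct effect (assumed knob)
    """
    total_effect = f(audio_feat, gender)   # factual prediction: audio + gender
    direct_effect = f(ref_feat, gender)    # counterfactual: gender-only path
    return total_effect - alpha * direct_effect

# Toy linear "model" purely to exercise the function.
w = np.array([0.8, -0.3])        # weights on acoustic features
gender_bias = np.array([0.5, -0.5])  # additive shift the model learned per gender

def f(x, g):
    return float(x @ w + gender_bias[g])

x = np.array([1.0, 2.0])         # a test sample's acoustic features
ref = np.zeros(2)                # counterfactual "no evidence" input

biased = f(x, 0)                           # 0.8 - 0.6 + 0.5 = 0.7
fair = debiased_logit(f, x, 0, ref)        # 0.7 - 0.5 = 0.2
```

In this toy setup the subtraction removes exactly the gender-dependent shift `gender_bias[g]`, leaving only the contribution of the acoustic features, which mirrors the abstract's goal of keeping the prediction driven by acoustic pathological cues rather than the direct gender-to-prediction path.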