Deep neural networks are highly susceptible to learning biases present in visual data. While various methods have been proposed to mitigate such bias, the majority require explicit knowledge of the biases in the training data in order to do so. We argue for exploring methods that are entirely agnostic to the presence of bias yet capable of identifying and mitigating it. Furthermore, we propose using Bayesian neural networks with a predictive-uncertainty-weighted loss function to dynamically identify potential bias in individual training samples and to weight those samples accordingly during training. We find a positive correlation between samples subject to bias and higher epistemic uncertainty. Finally, we show that the method has the potential to mitigate visual bias on a bias benchmark dataset and on a real-world face detection problem, and we discuss the merits and weaknesses of our approach.
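The core mechanism described above can be sketched as follows. This is a minimal illustrative implementation, not the paper's exact formulation: it assumes epistemic uncertainty is estimated as the mutual information (BALD) between predictions and model weights over T stochastic forward passes (e.g. Monte Carlo dropout), and the specific weighting scheme `1 + normalized uncertainty` is a hypothetical choice for illustration.

```python
import numpy as np

def softmax(z, axis=-1):
    """Numerically stable softmax over the class axis."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def epistemic_uncertainty(mc_probs):
    """Mutual-information (BALD) estimate of epistemic uncertainty.

    mc_probs: array of shape (T, N, C) -- class probabilities from
    T stochastic forward passes over N samples with C classes.
    Returns one non-negative uncertainty value per sample, shape (N,).
    """
    mean_p = mc_probs.mean(axis=0)                                   # (N, C)
    predictive = -(mean_p * np.log(mean_p + 1e-12)).sum(-1)          # total
    expected = -(mc_probs * np.log(mc_probs + 1e-12)).sum(-1).mean(0)  # aleatoric
    return predictive - expected                                     # epistemic

def uncertainty_weighted_loss(mc_probs, labels):
    """Per-sample cross-entropy on the MC-mean prediction, up-weighted by
    normalized epistemic uncertainty so that potentially biased samples
    (high uncertainty) contribute more to the gradient."""
    mean_p = mc_probs.mean(axis=0)
    n = labels.shape[0]
    ce = -np.log(mean_p[np.arange(n), labels] + 1e-12)
    u = epistemic_uncertainty(mc_probs)
    w = 1.0 + u / (u.max() + 1e-12)   # hypothetical weighting, range [1, 2]
    return (w * ce).mean()
```

Because entropy is concave, the mutual-information term is non-negative, and since every weight is at least 1, the weighted loss never falls below the unweighted cross-entropy; samples the model is epistemically unsure about (a signal this abstract correlates with bias) are emphasized during training.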