Autonomous vehicles (AVs) increasingly rely on Federated Learning (FL) to enhance perception models while preserving privacy. However, existing FL frameworks struggle to balance privacy, fairness, and robustness, leading to performance disparities across demographic groups. Privacy-preserving techniques like differential privacy mitigate data leakage risks but worsen fairness by restricting access to sensitive attributes needed for bias correction. This work explores the trade-off between privacy and fairness in FL-based object detection for AVs and introduces RESFL, an integrated solution optimizing both. RESFL incorporates adversarial privacy disentanglement and uncertainty-guided fairness-aware aggregation. The adversarial component uses a gradient reversal layer to remove sensitive attributes, reducing privacy risks while maintaining fairness. The uncertainty-aware aggregation employs an evidential neural network to weight client updates adaptively, prioritizing contributions with lower fairness disparities and higher confidence. This ensures robust and equitable FL model updates. We evaluate RESFL on the FACET dataset and CARLA simulator, assessing accuracy, fairness, privacy resilience, and robustness under varying conditions. RESFL improves detection accuracy, reduces fairness disparities, and lowers privacy attack success rates while demonstrating superior robustness to adversarial conditions compared to other approaches.
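The adversarial privacy disentanglement described above hinges on a gradient reversal layer (GRL). Below is a minimal PyTorch sketch of that standard mechanism (in the style of Ganin & Lempitsky's domain-adversarial training): the layer is the identity on the forward pass and negates gradients on the backward pass, so an adversarial head trained to predict a sensitive attribute pushes the feature extractor to discard it. The head architecture, feature dimension, and λ value are illustrative assumptions, not RESFL's published configuration.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; multiplies gradients by -lambda on backward."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) the gradient flowing back into the feature extractor.
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# Hypothetical adversarial head that predicts a sensitive attribute from features.
feat_dim, num_groups = 256, 4  # illustrative sizes, not from the paper
adversary = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, num_groups))

features = torch.randn(8, feat_dim, requires_grad=True)
logits = adversary(grad_reverse(features, lambd=0.5))
loss = nn.functional.cross_entropy(logits, torch.randint(0, num_groups, (8,)))
loss.backward()  # features.grad now steers the extractor AWAY from encoding the attribute
```

Because the reversed gradient penalizes any feature direction the adversary can exploit, the shared representation sent to the server carries less sensitive-attribute information, which is what reduces the privacy attack surface without an explicit fairness-destroying noise mechanism.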
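The uncertainty-guided aggregation can likewise be sketched as a weighted variant of FedAvg. In the sketch below, each client reports an evidential confidence (e.g. derived from an evidential head's Dirichlet parameters, here simplified to a scalar in [0, 1]) and a fairness disparity score; the server upweights confident, low-disparity updates. The specific weighting rule `confidence / (1 + disparity)` is an assumption chosen for illustration, not RESFL's exact formula.

```python
import numpy as np

def weighted_aggregate(client_updates, confidences, disparities):
    """Combine client updates, favoring high confidence and low fairness disparity.

    client_updates: list of np.ndarray, flattened model deltas from each client
    confidences:    per-client evidential confidence in [0, 1] (1 - uncertainty)
    disparities:    per-client fairness disparity (e.g. gap in per-group detection AP)
    """
    scores = np.asarray(confidences) / (1.0 + np.asarray(disparities))
    weights = scores / scores.sum()  # normalize to a convex combination
    return sum(w * u for w, u in zip(weights, client_updates))

# Illustrative round with three clients
updates = [np.ones(10) * k for k in (0.5, 1.0, 1.5)]
new_delta = weighted_aggregate(
    updates,
    confidences=[0.9, 0.6, 0.8],
    disparities=[0.05, 0.30, 0.10],
)
```

Keeping the weights a convex combination preserves the scale of a plain FedAvg update while letting the fairness and uncertainty signals redistribute influence across clients each round.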