Federated learning (FL) enables collaborative model training across distributed clients while preserving data privacy. Despite its widespread adoption, most FL approaches focus solely on privacy protection and fall short in scenarios where trustworthiness is crucial, which calls for advances in secure training, dependable decision-making mechanisms, robustness to data corruptions, and improved performance on non-IID data. To bridge this gap, this paper introduces Trustworthy Personalized Federated Learning (TPFL), a framework designed for classification tasks via subjective logic. Specifically, TPFL takes the distinctive approach of employing subjective logic to construct federated models that deliver probabilistic decisions coupled with an assessment of uncertainty, rather than mere probability assignments. By incorporating a trainable heterogeneity prior into the local training phase, TPFL effectively mitigates the adverse effects of data heterogeneity. Model uncertainty and instance uncertainty are further exploited to ensure the safety and reliability of the training and inference stages. Through extensive experiments on widely recognized federated learning benchmarks, we demonstrate that TPFL not only achieves competitive performance compared with advanced methods but also exhibits resilience against prevalent malicious attacks, robustness to domain shifts, and reliability in high-stakes scenarios.
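As a minimal sketch of the kind of uncertainty-aware decision the abstract describes, the snippet below implements the standard subjective-logic opinion over K classes: given non-negative per-class evidence e_k, belief masses are b_k = e_k / S and the uncertainty mass is u = K / S, where S = Σ e_k + K is the Dirichlet strength. The function name and evidence values are illustrative, not part of TPFL itself.

```python
import numpy as np

def subjective_opinion(evidence):
    """Map per-class evidence e_k to a subjective-logic opinion.

    Returns belief masses b_k = e_k / S, uncertainty mass u = K / S,
    and expected class probabilities p_k = (e_k + 1) / S, where
    S = sum_k e_k + K is the Dirichlet strength and K the class count.
    """
    evidence = np.asarray(evidence, dtype=float)
    K = evidence.size
    S = evidence.sum() + K
    belief = evidence / S
    uncertainty = K / S
    probs = (evidence + 1.0) / S  # mean of Dirichlet(e + 1)
    return belief, uncertainty, probs

# Strong evidence for class 0 -> confident opinion, low uncertainty.
b, u, p = subjective_opinion([10.0, 1.0, 1.0])
# No evidence at all -> vacuous opinion, maximal uncertainty (u = 1).
b0, u0, p0 = subjective_opinion([0.0, 0.0, 0.0])
```

Note that beliefs and uncertainty always sum to one (Σ b_k + u = 1), so the uncertainty mass directly quantifies how much probability the model declines to commit to any class, which is what allows rejecting unreliable predictions in high-stakes settings.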