To support real-world decision-making, it is crucial for models to be well-calibrated, i.e., to assign reliable confidence estimates to their predictions. Uncertainty quantification is particularly important in personalized federated learning (PFL), as participating clients typically have small local datasets, making it difficult to unambiguously determine optimal model parameters. Bayesian PFL (BPFL) methods can potentially enhance calibration, but they often come with considerable computational and memory requirements due to the need to track the variances of all the individual model parameters. Furthermore, different clients may exhibit heterogeneous uncertainty levels owing to varying local dataset sizes and distributions. To address these challenges, we propose LR-BPFL, a novel BPFL method that learns a global deterministic model along with personalized low-rank Bayesian corrections. To tailor the local model to each client's inherent uncertainty level, LR-BPFL incorporates an adaptive rank selection mechanism. We evaluate LR-BPFL across a variety of datasets, demonstrating its advantages in terms of calibration, accuracy, and computational and memory efficiency.
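The efficiency argument above can be illustrated with a minimal sketch of a low-rank Bayesian correction: only the low-rank factors carry a (mean-field Gaussian) posterior, so the number of tracked variances scales with the rank rather than with the full weight matrix. This is an illustrative sketch under those assumptions, not the paper's implementation; all names and shapes are hypothetical.

```python
import numpy as np

def sample_corrected_weights(W_global, A_mu, A_logvar, B_mu, B_logvar, rng):
    """Sample effective weights W = W_global + B @ A via the
    reparameterization trick, where the low-rank factors
    A (r x d_in) and B (d_out x r) have Gaussian posteriors.

    Only the factors are Bayesian, so the number of tracked
    variances is O(r * (d_in + d_out)) instead of O(d_in * d_out)
    for a fully Bayesian layer.
    """
    A = A_mu + np.exp(0.5 * A_logvar) * rng.standard_normal(A_mu.shape)
    B = B_mu + np.exp(0.5 * B_logvar) * rng.standard_normal(B_mu.shape)
    return W_global + B @ A

# Illustrative shapes: a 4x8 layer corrected at rank r = 2.
rng = np.random.default_rng(0)
d_out, d_in, r = 4, 8, 2
W_global = rng.standard_normal((d_out, d_in))      # shared, deterministic
A_mu, A_logvar = np.zeros((r, d_in)), np.full((r, d_in), -4.0)
B_mu, B_logvar = np.zeros((d_out, r)), np.full((d_out, r), -4.0)

W = sample_corrected_weights(W_global, A_mu, A_logvar, B_mu, B_logvar, rng)

# Variances tracked by the low-rank posterior vs. a fully Bayesian layer.
lowrank_vars = A_logvar.size + B_logvar.size   # 2*8 + 4*2 = 24
full_vars = d_out * d_in                       # 4*8 = 32
```

With a small rank the savings are modest, but for realistic layer sizes (e.g., a 512x512 layer at rank 4) the low-rank posterior tracks orders of magnitude fewer variances than a fully Bayesian layer.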