We propose PRISM-FCP (Partial shaRing and robust calIbration with Statistical Margins for Federated Conformal Prediction), a Byzantine-resilient federated conformal prediction framework that uses partial model sharing to improve robustness against Byzantine attacks during both model training and conformal calibration. Existing approaches address adversarial behavior only in the calibration stage, leaving the learned model susceptible to poisoned updates. In contrast, PRISM-FCP mitigates attacks end-to-end. During training, clients partially share updates by transmitting only $M$ of the $D$ model parameters per round, which attenuates the expected energy of an adversary's perturbation in the aggregated update by a factor of $M/D$, yielding lower mean-square error (MSE) and tighter prediction intervals. During calibration, clients convert their nonconformity scores into characterization vectors, compute distance-based maliciousness scores, and downweight or filter suspected Byzantine contributions before estimating the conformal quantile. Extensive experiments on both synthetic data and the UCI Superconductivity dataset demonstrate that PRISM-FCP maintains nominal coverage guarantees under Byzantine attacks and avoids the interval inflation observed in standard FCP, all while reducing communication, providing a robust and communication-efficient approach to federated uncertainty quantification.
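The two mechanisms above can be sketched in code. This is a minimal illustration, not the paper's implementation: the uniform-random coordinate selection in `partial_share`, the use of per-client empirical quantiles as characterization vectors, and the distance-to-median maliciousness score with a fixed `keep_frac` filter are all assumptions, since the abstract does not specify these details.

```python
import numpy as np


def partial_share(update, m, rng):
    """Training stage (sketch): transmit only m of the D coordinates of a
    client update, zeroing the rest. Uniform-random coordinate selection
    is an assumption; the paper's selection rule is not given here.
    Masking an adversary's perturbation this way attenuates its expected
    energy in the aggregate by a factor of m/D."""
    d = update.size
    mask = np.zeros(d)
    mask[rng.choice(d, size=m, replace=False)] = 1.0
    return update * mask


def robust_conformal_quantile(client_scores, alpha=0.1,
                              probe_qs=(0.1, 0.25, 0.5, 0.75, 0.9),
                              keep_frac=0.8):
    """Calibration stage (sketch): summarize each client's nonconformity
    scores as a characterization vector (here, a few empirical quantiles),
    score maliciousness as distance to the coordinate-wise median vector,
    filter the most suspicious clients, then estimate the conformal
    quantile from the retained scores."""
    # 1. Characterization vectors: per-client empirical quantiles.
    chars = np.array([np.quantile(s, probe_qs) for s in client_scores])
    # 2. Maliciousness score: distance to the coordinate-wise median.
    center = np.median(chars, axis=0)
    malic = np.linalg.norm(chars - center, axis=1)
    # 3. Keep the keep_frac fraction of clients with lowest scores.
    n_keep = max(1, int(np.ceil(keep_frac * len(client_scores))))
    kept = np.argsort(malic)[:n_keep]
    pooled = np.concatenate([client_scores[i] for i in kept])
    # 4. Standard conformal quantile at level ceil((n+1)(1-alpha))/n.
    n = len(pooled)
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    return np.quantile(pooled, level, method="higher")
```

In this sketch, a Byzantine client that submits wildly inflated nonconformity scores produces a characterization vector far from the median and is filtered out, so the estimated quantile (and hence the prediction interval width) stays close to the attack-free value instead of inflating.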