Decentralized federated learning (DFL) enables collaborative model training across edge devices without centralized coordination, offering resilience against single points of failure. However, statistical heterogeneity arising from non-identically distributed local data creates a fundamental challenge: nodes must learn personalized models adapted to their local distributions while selectively collaborating with compatible peers. Existing approaches either enforce a single global model that fits no one well, or rely on heuristic peer selection mechanisms that cannot distinguish peers with genuinely incompatible data distributions from those with valuable complementary knowledge. We present Murmura, a framework that leverages evidential deep learning to enable trust-aware model personalization in DFL. Our key insight is that epistemic uncertainty from Dirichlet-based evidential models directly indicates peer compatibility: high epistemic uncertainty when a peer's model evaluates local data reveals distributional mismatch, enabling nodes to exclude incompatible influence while maintaining personalized models through selective collaboration. Murmura introduces a trust-aware aggregation mechanism that computes peer compatibility scores through cross-evaluation on local validation samples and personalizes model aggregation based on evidential trust with adaptive thresholds. Evaluation on three wearable IoT datasets (UCI HAR, PAMAP2, PPG-DaLiA) demonstrates that Murmura reduces performance degradation from IID to non-IID conditions compared to the baseline (0.9% vs. 19.3%), achieves 7.4$\times$ faster convergence, and maintains stable accuracy across hyperparameter choices. These results establish evidential uncertainty as a principled foundation for compatibility-aware personalization in decentralized heterogeneous environments.
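The compatibility signal described above follows the standard evidential deep learning formulation: a model outputs non-negative evidence $e_k$ per class, inducing Dirichlet parameters $\alpha_k = e_k + 1$, and epistemic uncertainty $u = K / \sum_k \alpha_k$ for $K$ classes. The sketch below illustrates this quantity and a simple trust-gated aggregation weight in the spirit of the abstract; the function names, the fixed threshold, and the specific gating rule are illustrative assumptions, not Murmura's actual adaptive mechanism.

```python
import numpy as np

def epistemic_uncertainty(evidence):
    """Dirichlet epistemic uncertainty u = K / S from per-class evidence.

    evidence: non-negative evidence vector e_k produced by an evidential head.
    Returns u in (0, 1]; u near 1 means almost no evidence, i.e. the model
    is unfamiliar with the input (the distributional-mismatch signal).
    """
    alpha = np.asarray(evidence, dtype=float) + 1.0  # alpha_k = e_k + 1
    K = alpha.shape[-1]                              # number of classes
    S = alpha.sum(axis=-1)                           # Dirichlet strength
    return K / S

def trust_weights(peer_uncertainties, threshold=0.5):
    """Hypothetical trust-gated aggregation weights (assumed, for illustration).

    Peers whose models show epistemic uncertainty above `threshold` on local
    validation data are excluded; remaining peers are weighted by 1 - u and
    normalized. Murmura's adaptive thresholding is not reproduced here.
    """
    u = np.asarray(peer_uncertainties, dtype=float)
    trust = np.where(u < threshold, 1.0 - u, 0.0)
    total = trust.sum()
    return trust / total if total > 0 else trust

# A peer with strong evidence on local data vs. an incompatible peer:
u_compatible = epistemic_uncertainty([9.0, 0.5, 0.5])   # u = 3/13 ~= 0.23
u_mismatched = epistemic_uncertainty([0.1, 0.1, 0.1])   # u = 3/3.3 ~= 0.91
weights = trust_weights([u_compatible, u_mismatched, 0.3])
```

In this toy example the mismatched peer receives zero aggregation weight, so its parameters cannot dilute the node's personalized model, while the compatible peers share influence proportionally to their evidential trust.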