Recent advances in machine learning have improved performance while also increasing computational demands. Federated and distributed setups help meet these demands, but their structure makes them vulnerable to malicious influence. In this paper, we address a specific threat, Byzantine attacks, in which compromised clients inject adversarial updates to derail global convergence. We combine the concept of trust scores with a trial function methodology to dynamically filter out outliers. Our methods address critical limitations of previous approaches, remaining effective even when Byzantine nodes are in the majority. Moreover, our algorithms extend to widely used scaled methods such as Adam and RMSProp, as well as to practical scenarios, including local training and partial participation. We validate the robustness of our methods through extensive experiments on both synthetic data and real ECG data collected from medical institutions. Furthermore, we provide a thorough theoretical analysis of our algorithms and their extensions to the aforementioned practical setups. The convergence guarantees of our methods are comparable to those of classical algorithms developed without Byzantine interference.
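To give a rough sense of the trust-score idea described above, the sketch below down-weights client updates whose trial evaluation worsens a reference loss before averaging them; this is only an illustration under assumed interfaces, not the paper's exact algorithm, and all names (`trust_weighted_aggregate`, `temperature`) are hypothetical.

```python
import numpy as np

def trust_weighted_aggregate(updates, trial_loss, ref_loss, temperature=1.0):
    """Aggregate client updates, down-weighting likely Byzantine ones.

    updates    : (n_clients, dim) array of proposed parameter updates
    trial_loss : (n_clients,) loss at the trial point built from each update
    ref_loss   : scalar loss at the current iterate (the reference point)
    """
    # Trust score: updates whose trial point increases the loss are suspicious
    # and receive exponentially smaller weight.
    scores = np.exp(-np.maximum(trial_loss - ref_loss, 0.0) / temperature)
    weights = scores / scores.sum()
    # Weighted average of the client updates.
    return weights @ updates

# Toy usage: three honest clients and one adversarial client whose trial loss is large.
updates = np.array([[0.1, 0.2], [0.12, 0.18], [0.09, 0.21], [5.0, -4.0]])
trial_loss = np.array([0.9, 0.92, 0.88, 10.0])
agg = trust_weighted_aggregate(updates, trial_loss, ref_loss=1.0)
print(agg)  # close to the honest clients' average
```

Because the weighting is data-driven rather than based on a fixed honest-majority assumption, a filter of this kind can in principle remain useful even when adversarial clients outnumber honest ones, which is the regime the abstract highlights.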