Gradient sparsification, while mitigating communication bottlenecks in Federated Learning (FL), fundamentally alters the geometric landscape of model updates. We reveal that the resultant high-dimensional orthogonality renders traditional Euclidean-based robust aggregation metrics mathematically ambiguous, creating a 'sparsity-robustness trade-off' that adversaries exploit to bypass detection. To resolve this structural dissonance, we propose SafeSparse, a consensus restoration framework that decouples defense into topological and semantic dimensions. Unlike prior art, which treats sparsification and security as orthogonal concerns, SafeSparse introduces: (1) a Structure-Aware Calibration mechanism that uses Jaccard similarity to filter topological outliers induced by index poisoning; and (2) a Directional Semantic Alignment module that applies density-based clustering to update signs to neutralize magnitude-invariant attacks. Theoretically, we establish convergence guarantees for SafeSparse. Extensive experiments across multiple datasets and attack scenarios demonstrate that SafeSparse recovers up to 25.7% in global accuracy under coordinated poisoning, effectively closing the vulnerability gap in communication-efficient FL.
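The Structure-Aware Calibration idea can be illustrated with a minimal sketch: compare each client's sparsity pattern (the set of retained gradient indices) to every other client's via Jaccard similarity, and drop clients whose patterns deviate from the consensus. This is not the paper's implementation; the threshold rule, function names, and example index sets are illustrative assumptions.

```python
# Hypothetical sketch of Jaccard-based topological filtering for
# sparsified FL updates; names and the mean-similarity threshold
# rule are assumptions, not the authors' algorithm.

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity |a ∩ b| / |a ∪ b| between two index sets."""
    union = a | b
    return len(a & b) / len(union) if union else 1.0

def filter_topological_outliers(index_sets, threshold=0.3):
    """Keep client ids whose mean Jaccard similarity to the other
    clients' sparsity patterns is at least `threshold`."""
    kept = []
    for i, s in enumerate(index_sets):
        sims = [jaccard(s, t) for j, t in enumerate(index_sets) if j != i]
        score = sum(sims) / len(sims) if sims else 1.0
        if score >= threshold:
            kept.append(i)
    return kept

# Benign clients share most top-k indices; an index-poisoned client
# reports a disjoint set and falls below the consensus threshold.
benign = [{1, 2, 3, 4, 5}, {1, 2, 3, 4, 6}, {1, 2, 3, 5, 6}]
poisoned = [{10, 11, 12, 13, 14}]
print(filter_topological_outliers(benign + poisoned))  # → [0, 1, 2]
```

In practice the index sets would come from each client's top-k sparsification mask, and the threshold could be set adaptively (e.g. from the median pairwise similarity) rather than fixed.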