Federated Learning (FL) has become a viable technique for realizing privacy-enhancing distributed deep learning on the network edge. Heterogeneous hardware, unreliable client devices, and energy constraints often characterize edge computing systems. In this paper, we propose FLEdge, which complements existing FL benchmarks by enabling a systematic evaluation of client capabilities. We focus on computational and communication bottlenecks, client behavior, and data security implications. Our experiments with models varying from 14K to 80M trainable parameters are carried out on dedicated hardware with emulated network characteristics and client behavior. We find that state-of-the-art embedded hardware has significant memory bottlenecks, leading to 4x longer processing times than on modern data center GPUs.