Federated Learning (FL) is a widely adopted protocol for training Machine Learning (ML) models while keeping data decentralized. However, a heterogeneous set of participating devices introduces delays in the training process, particularly for devices with limited resources. Moreover, training ML models with a vast number of parameters demands computing and memory resources beyond the capabilities of small devices, such as mobile and Internet of Things (IoT) devices. To address these issues, techniques like Parallel Split Learning (SL) have been introduced, allowing multiple resource-constrained devices to participate in collaborative training with assistance from resourceful compute nodes. Nonetheless, a drawback of Parallel SL is the substantial memory it requires at the compute nodes; for instance, training VGG-19 with 100 participants requires 80 GB. In this paper, we introduce Multihop Parallel SL (MP-SL), a modular and extensible ML as a Service (MLaaS) framework designed to enable resource-constrained devices to take part in collaborative and distributed ML model training. Notably, to alleviate the memory demand per compute node, MP-SL supports multihop Parallel SL-based training: the model is split into multiple parts, and multiple compute nodes are utilized in a pipelined manner. Extensive experimentation validates MP-SL's ability to handle system heterogeneity and demonstrates that the multihop configuration is more efficient than horizontally scaled one-hop Parallel SL setups, especially in scenarios involving more cost-effective compute nodes.
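To make the multihop idea concrete, below is a minimal single-process sketch of the mechanism the abstract describes: a model partitioned across a device and two compute-node hops, with activations forwarded hop by hop and gradients returned in reverse. This is not the MP-SL implementation; the three-part split, layer sizes, optimizer settings, and the placement of the loss at the last hop are illustrative assumptions, and real deployments exchange the boundary tensors over the network.

```python
# Minimal sketch of multihop split training (illustrative only; names,
# topology, and hyperparameters are assumptions, not the MP-SL API).
import torch
import torch.nn as nn

# Split the model into three parts: the device holds the first layers;
# each intermediate "hop" is a compute node holding one part.
device_part = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 256), nn.ReLU())
hop1_part   = nn.Sequential(nn.Linear(256, 128), nn.ReLU())
hop2_part   = nn.Sequential(nn.Linear(128, 10))

parts = [device_part, hop1_part, hop2_part]
opts = [torch.optim.SGD(p.parameters(), lr=0.1) for p in parts]
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(32, 1, 28, 28)           # one mini-batch from a client
y = torch.randint(0, 10, (32,))          # labels (here used at the last hop)

# Forward pass: each hop receives the previous hop's activations ("smashed
# data"), detached so that gradients are exchanged explicitly across hops.
acts_in, acts_out = [], []
h = x
for part in parts:
    h = h.detach().requires_grad_()      # boundary tensor sent over the wire
    acts_in.append(h)
    h = part(h)
    acts_out.append(h)

loss = loss_fn(acts_out[-1], y)

# Backward pass: gradients flow hop by hop in reverse order.
grad = None
for part, a_in, a_out, opt in zip(reversed(parts), reversed(acts_in),
                                  reversed(acts_out), reversed(opts)):
    opt.zero_grad()
    if grad is None:
        loss.backward()                  # last hop starts from the loss
    else:
        a_out.backward(grad)             # inject gradient from the next hop
    grad = a_in.grad                     # gradient shipped to the previous hop
    opt.step()
```

In the actual pipelined setting, mini-batches from many participating devices occupy different hops concurrently, so each compute node only holds the activations for its own model part, which is what reduces the per-node memory footprint relative to a one-hop setup.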