Federated Learning (FL) is a recent model training paradigm in which client devices collaboratively train a model without ever aggregating their data. Crucially, this scheme offers users potential privacy and security benefits: clients communicate only model weight updates to a central server, in contrast to traditional machine learning (ML) training, which collects and aggregates raw data directly. However, FL training suffers from statistical heterogeneity, as clients may have differing local data distributions. Large language models (LLMs) offer a potential solution to this heterogeneity, as they have consistently been shown to learn from vast amounts of noisy data. Yet while LLMs are a promising development for mitigating the persistent issue of non-I.I.D. clients, they exacerbate two other bottlenecks in federated settings: limited local computation and expensive communication. This thesis aims to develop efficient training methods for LLMs in FL. To this end, we employ two key techniques. First, we use low-rank adaptation (LoRA) to reduce the computational load of local model training. Second, we communicate sparse updates throughout training to significantly cut communication costs. Taken together, our method reduces communication costs by up to 10x over vanilla LoRA and up to 5x over more complex sparse-LoRA baselines while achieving greater utility. We emphasize the importance of applying sparsity carefully and of choosing effective rank and sparsity configurations for federated LLM training.
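The two techniques above can be illustrated with a minimal sketch: a client holds small LoRA factors in place of a full weight update, and only the largest-magnitude entries of a factor are transmitted to the server. The matrix sizes, the `topk_sparsify` helper, and the 10% keep fraction below are illustrative assumptions, not the thesis's actual method or configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative LoRA shapes: the frozen layer weight would be (d_out x d_in),
# but the client only trains the low-rank factors B (d_out x r) and A (r x d_in).
# Real LLM layers are far larger; r is the (small) adapter rank.
d_out, d_in, r = 64, 64, 4
B = rng.normal(scale=0.01, size=(d_out, r))
A = rng.normal(scale=0.01, size=(r, d_in))

def topk_sparsify(update, keep_frac=0.1):
    """Keep only the top-k entries of `update` by magnitude; zero the rest.

    Hypothetical helper: communicating just the surviving values (plus their
    indices) is what cuts the per-round upload cost.
    """
    k = max(1, int(keep_frac * update.size))
    # k-th largest absolute value acts as the keep/drop threshold.
    threshold = np.partition(np.abs(update).ravel(), -k)[-k]
    mask = np.abs(update) >= threshold
    return update * mask, mask

# Sparsify one factor before "sending" it; the dense factor stays local.
sparse_B, mask_B = topk_sparsify(B)
kept, total = int(mask_B.sum()), B.size
print(f"transmit {kept} of {total} entries")
```

Here the communicated payload shrinks from `d_out * r` dense values to roughly `keep_frac` of that (plus index overhead), on top of LoRA's already large reduction versus sending a full `d_out * d_in` weight delta.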