Federated fine-tuning (FFT) aims to fine-tune a pre-trained model with private data from distributed clients by exchanging models rather than data under the orchestration of a parameter server (PS). To overcome the communication and memory bottlenecks that growing model sizes impose on clients in such systems, we propose \textit{FeedSign}, an FFT algorithm in which the upload and download payload per aggregation step is exactly $1$ bit, while the memory footprint is reduced to that of inference. This is achieved by combining zeroth-order (ZO) optimization on large models with pseudo-random number generators (PRNGs) shared across devices, so that gradient estimates can be represented as seed-sign pairs. We provide a theoretical analysis of FeedSign and show that, under widely used assumptions, it converges at an exponential rate $\mathcal{O}(e^{-t})$, where $t$ is the number of elapsed steps. Moreover, FeedSign is found to be robust against data heterogeneity and Byzantine attacks. We conduct extensive experiments on models of different architectures and sizes (11M to 13B parameters) and find that the proposed method performs comparably to or better than its ZO and first-order (FO) counterparts, depending on the scenario, albeit with orders-of-magnitude lower communication overhead. We also discuss several interesting advantages that arise as byproducts of the minimalistic design of \textit{FeedSign}.
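To illustrate the seed-sign mechanism described above, the following is a minimal, self-contained sketch of one plausible reading of it: each client forms a ZO (two-point finite-difference) gradient estimate along a random direction regenerated from a shared PRNG seed, uploads only the sign of the projected gradient ($1$ bit), and the PS majority-votes and broadcasts a single bit back. The toy quadratic loss, the number of clients, and all hyperparameter values are illustrative assumptions, not details from the paper.

```python
import random

def perturbation(seed, dim):
    # Regenerate the same random direction on every device from a shared seed.
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(dim)]

def loss(theta):
    # Toy quadratic loss standing in for a model's fine-tuning loss (assumption).
    return sum(x * x for x in theta)

def client_step(theta, seed, eps=1e-3):
    # Two-point ZO estimate of the gradient projected onto direction z.
    z = perturbation(seed, len(theta))
    plus = loss([t + eps * zi for t, zi in zip(theta, z)])
    minus = loss([t - eps * zi for t, zi in zip(theta, z)])
    proj = (plus - minus) / (2 * eps)
    return 1 if proj >= 0 else -1  # the single uploaded bit

def apply_update(theta, seed, sign, lr=0.05):
    # Every device reconstructs z from the seed; only the sign was communicated.
    z = perturbation(seed, len(theta))
    return [t - lr * sign * zi for t, zi in zip(theta, z)]

theta = [1.0, -2.0, 0.5]
for step in range(500):
    seed = step  # shared PRNG seed known to the PS and all clients
    # Three clients vote; here they share data, in FFT each holds private data.
    votes = [client_step(theta, seed) for _ in range(3)]
    sign = 1 if sum(votes) > 0 else -1  # PS majority vote, broadcast as 1 bit
    theta = apply_update(theta, seed, sign)
```

Because the sign-based update takes fixed-magnitude steps along the regenerated direction, this sketch drives the toy loss down but hovers near the optimum rather than converging exactly; it is meant only to show why one bit per step suffices when the PRNG state is shared.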