The expanding scale of neural networks poses a major challenge for distributed machine learning, particularly under limited communication resources. While split learning (SL) alleviates the client computational burden by distributing model layers between clients and the server, it incurs substantial communication overhead from the frequent transmission of intermediate activations and gradients. To tackle this issue, we propose NSC-SL, a bandwidth-aware adaptive compression algorithm for communication-efficient SL. NSC-SL first dynamically determines the optimal rank of the low-rank approximation based on the singular value distribution, adapting to real-time bandwidth constraints. NSC-SL then performs error-compensated tensor factorization using alternating orthogonal iteration with residual feedback, effectively minimizing truncation loss. These collaborative mechanisms enable NSC-SL to achieve high compression ratios while preserving the semantically rich information essential for convergence. Extensive experiments demonstrate the superior performance of NSC-SL.
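To make the two mechanisms concrete, the following is a minimal Python sketch of bandwidth-aware rank selection with residual (error) feedback. It substitutes a plain truncated SVD for the paper's alternating orthogonal tensor iteration, and all names (`select_rank`, `compress_with_feedback`, `bandwidth_budget`) and the cost/selection criteria are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the two NSC-SL ideas described above, under simplifying
# assumptions: matrices instead of tensors, truncated SVD instead of the
# paper's alternating orthogonal iteration. All names are hypothetical.
import numpy as np

def select_rank(singular_values, bandwidth_budget, shape):
    """Pick the largest rank whose transmission cost fits the bandwidth
    budget (assumed criterion; the paper selects rank from the singular
    value distribution under real-time bandwidth constraints)."""
    m, n = shape
    # Cost (in floats) of sending a rank-r factorization: U (m x r),
    # singular values (r), and V^T (r x n).
    costs = np.array([(m + n + 1) * r
                      for r in range(1, len(singular_values) + 1)])
    feasible = np.nonzero(costs <= bandwidth_budget)[0]
    return int(feasible[-1]) + 1 if feasible.size else 1

def compress_with_feedback(activation, residual, bandwidth_budget):
    """Compress (activation + carried residual) by truncated SVD; the
    truncation error is fed back as the next round's residual, so the
    loss is compensated over time rather than discarded."""
    target = activation + residual
    U, s, Vt = np.linalg.svd(target, full_matrices=False)
    r = select_rank(s, bandwidth_budget, target.shape)
    approx = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]
    new_residual = target - approx  # carried into the next round
    return approx, new_residual

# Toy usage: one client activation matrix per training round.
rng = np.random.default_rng(0)
residual = np.zeros((64, 128))
for step in range(3):
    act = rng.standard_normal((64, 128))
    approx, residual = compress_with_feedback(act, residual,
                                              bandwidth_budget=4000)
    print(step, np.linalg.norm(act - approx) / np.linalg.norm(act))
```

Lowering `bandwidth_budget` in this sketch forces a smaller rank and a larger per-round residual, which the feedback term then reinjects into subsequent rounds; this is the intuition behind error-compensated compression preserving convergence-critical information.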