Adapter-based Federated Large Language Models (FedLLMs) are widely adopted to reduce the computational, storage, and communication overhead of full-parameter fine-tuning in web-scale applications while preserving user privacy. By freezing the backbone and training only compact low-rank adapters, these methods appear to limit gradient leakage and thwart existing Gradient Inversion Attacks (GIAs). Contrary to this assumption, we show that low-rank adapters create new, exploitable leakage channels. We propose the Unordered-word-bag-based Text Reconstruction (UTR) attack, a novel GIA tailored to the unique structure of adapter-based FedLLMs. UTR overcomes three core challenges (low-dimensional gradients, frozen backbones, and a combinatorially large reconstruction space) by: (i) inferring token presence from attention patterns in frozen layers, (ii) performing sentence-level inversion within the low-rank subspace of adapter gradients, and (iii) enforcing semantic coherence through constrained greedy decoding guided by language priors. Extensive experiments across diverse models (GPT2-Large, BERT, Qwen2.5-7B) and datasets (CoLA, SST-2, Rotten Tomatoes) demonstrate that UTR achieves near-perfect reconstruction accuracy (ROUGE-1/2 > 99), even at large batch sizes where prior GIAs fail completely. Our results reveal a fundamental tension between parameter efficiency and privacy in FedLLMs, challenging the prevailing belief that lightweight adaptation inherently enhances security. Our code and data are available at https://github.com/shwksnshwowk-wq/GIA.
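The abstract's claim that adapter gradients still leak information can be illustrated with a toy low-rank (LoRA-style) layer. The sketch below is not the paper's UTR attack; it is a minimal, self-contained numpy example (all shapes and values are hypothetical) showing that the per-example gradient of the adapter's up-projection is a rank-1 outer product involving the input, which is exactly the kind of structured signal a gradient inversion attack can exploit.

```python
import numpy as np

# Toy LoRA-style layer: frozen backbone weight W plus trainable
# low-rank adapters A (r x d) and B (d x r). Shapes are hypothetical.
d, r = 8, 2
rng = np.random.default_rng(0)
W = rng.normal(size=(d, d))          # frozen backbone weight
A = rng.normal(size=(r, d)) * 0.1    # trainable down-projection
B = rng.normal(size=(d, r)) * 0.1    # trainable up-projection
                                     # (nonzero here for illustration;
                                     # LoRA typically initializes B = 0)

x = rng.normal(size=(d,))            # one input token embedding
y = (W + B @ A) @ x                  # adapted forward pass
g = rng.normal(size=(d,))            # placeholder upstream gradient dL/dy

# Adapter gradients: dL/dB = g (A x)^T and dL/dA = (B^T g) x^T.
grad_B = np.outer(g, A @ x)
grad_A = np.outer(B.T @ g, x)

# For a single example, dL/dB has rank 1: a server that receives this
# gradient can factor it and recover directions correlated with the
# private input x, despite the backbone being frozen.
_, s, _ = np.linalg.svd(grad_B)
effective_rank = int(np.sum(s > 1e-8))  # 1 for a single example
```

With a batch of examples the gradient becomes a sum of such outer products, so its rank grows with batch size; the abstract's result is that reconstruction remains feasible even then.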