Quantization leverages lower-precision weights to reduce the memory usage of large language models (LLMs) and is a key technique for enabling their deployment on commodity hardware. While LLM quantization's impact on utility has been extensively explored, this work is the first to study its adverse effects from a security perspective. We reveal that widely used quantization methods can be exploited to produce a harmful quantized LLM, even though the full-precision counterpart appears benign, potentially tricking users into deploying the malicious quantized model. We demonstrate this threat using a three-stage attack framework: (i) first, we obtain a malicious LLM through fine-tuning on an adversarial task; (ii) next, we quantize the malicious model and compute constraints that characterize all full-precision models that map to the same quantized model; (iii) finally, using projected gradient descent, we tune out the poisoned behavior from the full-precision model while ensuring that its weights satisfy the constraints computed in step (ii). This procedure yields an LLM that exhibits benign behavior in full precision but, once quantized, follows the adversarial behavior injected in step (i). We experimentally demonstrate the feasibility and severity of such an attack across three diverse scenarios: vulnerable code generation, content injection, and over-refusal attacks. In practice, the adversary could host the resulting full-precision model on an LLM community hub such as Hugging Face, exposing millions of users to the threat of deploying its malicious quantized version on their devices.
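The three stages above can be sketched for simple round-to-nearest quantization, used here as an illustrative stand-in for the quantization methods the attack targets. All function and variable names below are hypothetical, and the "benign" gradient is a toy placeholder for a real fine-tuning loss:

```python
import numpy as np

def quantize(w, scale):
    """Round-to-nearest uniform quantization (illustrative only; deployed
    methods such as GPTQ involve per-group scales and calibration)."""
    return np.round(w / scale) * scale

def preservation_box(w_malicious, scale, eps=1e-6):
    """Step (ii): the interval of full-precision weights that map to the
    same quantized values as the malicious model. A small margin avoids
    rounding ties exactly on the interval boundary."""
    q = quantize(w_malicious, scale)
    return q - scale / 2 + eps, q + scale / 2 - eps

def pgd_repair(w_malicious, benign_grad, scale, lr=0.01, steps=100):
    """Step (iii): projected gradient descent on a benign-behavior loss,
    projecting back into the box after every step so the quantized model
    (and the behavior injected in step (i)) never changes."""
    lo, hi = preservation_box(w_malicious, scale)
    w = w_malicious.copy()
    for _ in range(steps):
        w = w - lr * benign_grad(w)   # gradient step toward benign behavior
        w = np.clip(w, lo, hi)        # projection onto the constraint set
    return w

# Toy demo: the "benign" loss simply pulls weights toward zero.
w_mal = np.array([0.30, -0.45, 0.12])
scale = 0.1
w_repaired = pgd_repair(w_mal, lambda w: w, scale)

# Full-precision weights moved, but the quantized model is unchanged.
assert not np.allclose(w_repaired, w_mal)
assert np.allclose(quantize(w_repaired, scale), quantize(w_mal, scale))
```

The projection via `np.clip` is what makes the deception possible: the repaired full-precision weights can drift far enough to behave benignly, yet every weight stays inside the interval that rounds back to the malicious quantized value.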