Large Language Models (LLMs) have become widely popular and are deployed across many domains, including chatbots and autonomous task-completion agents. However, LLMs suffer from safety vulnerabilities that can be exploited through several types of attacks, such as jailbreaking, prompt injection, and privacy leakage. By bypassing security filters and abusing the model's access, these attacks can make powerful LLM systems generate malicious or unethical content, take harmful actions, or leak confidential information. Foundational LLMs undergo alignment training, which includes safety training and teaches the model to generate outputs that are ethical and consistent with human values. To make models safer still, guardrails are added to filter both the inputs the model receives and the outputs it generates. These foundational LLMs are then fine-tuned, quantized, or have their guardrails altered in order to adapt them to specialized tasks or resource-constrained environments. Understanding the impact of modifications such as fine-tuning, quantization, and guardrail changes on LLM safety is therefore an important question, and understanding and mitigating their consequences will help build reliable systems and effective strategies for making LLMs more secure. In this study, we tested foundational models such as Mistral, Llama, and MosaicML, along with their fine-tuned versions. Our comprehensive evaluations show that fine-tuning increases the jailbreak attack success rate (ASR), quantization has a variable impact on ASR, and guardrails can significantly improve jailbreak resistance.