In this study, we tackle a growing concern around the safety and ethical use of large language models (LLMs). Despite their potential, these models can be tricked into producing harmful or unethical content through various sophisticated methods, including 'jailbreaking' techniques and targeted manipulation. Our work zeroes in on a specific issue: to what extent LLMs can be led astray by asking them to generate responses that are instruction-centric such as a pseudocode, a program or a software snippet as opposed to vanilla text. To investigate this question, we introduce TechHazardQA, a dataset containing complex queries which should be answered in both text and instruction-centric formats (e.g., pseudocodes), aimed at identifying triggers for unethical responses. We query a series of LLMs -- Llama-2-13b, Llama-2-7b, Mistral-V2 and Mistral 8X7B -- and ask them to generate both text and instruction-centric responses. For evaluation we report the harmfulness score metric as well as judgements from GPT-4 and humans. Overall, we observe that asking LLMs to produce instruction-centric responses enhances the unethical response generation by ~2-38% across the models. As an additional objective, we investigate the impact of model editing using the ROME technique, which further increases the propensity for generating undesirable content. In particular, asking edited LLMs to generate instruction-centric responses further increases the unethical response generation by ~3-16% across the different models.
翻译:本研究针对大型语言模型(LLMs)的安全性与伦理使用这一日益增长的关切展开探讨。尽管LLMs潜力巨大,但通过“越狱”技术及针对性操控等复杂方法,这些模型可能被诱导生成有害或不道德的内容。我们聚焦于一个特定问题:当要求LLMs生成以指令为中心的回应(如伪代码、程序或软件片段)而非纯文本回应时,这些模型在多大程度上易被误导。为此,我们提出了TechHazardQA数据集,其中包含需同时以文本和指令为中心格式(如伪代码)回答的复杂查询,旨在识别引发不道德回应的触发因素。我们测试了多种LLMs——Llama-2-13b、Llama-2-7b、Mistral-V2和Mistral 8X7B——要求其同时生成文本与指令为中心的回应。通过有害性评分指标以及GPT-4与人工评估,我们观察到,要求LLMs生成以指令为中心的回应会使不道德回应的生成率在各模型中提升约2-38%。此外,作为额外目标,我们探究了使用ROME技术进行模型编辑的影响,该技术进一步增加了生成不良内容的倾向。具体而言,要求经编辑的LLMs生成以指令为中心的回应,会使不同模型的不道德回应生成率额外增加约3-16%。