In recent years, conversational large language models (LLMs) have shown tremendous success in tasks such as casual conversation, question answering, and personalized dialogue, making significant advancements in domains like virtual assistance, social interaction, and online customer engagement. However, they often generate responses that are not aligned with human values (e.g., ethical standards, safety, or social norms), leading to potentially unsafe or inappropriate outputs. While several techniques have been proposed to address this problem, they come at a cost, requiring computationally expensive training or dramatically increasing inference time. In this paper, we present DIESEL, a lightweight inference guidance technique that can be seamlessly integrated into any autoregressive LLM to semantically filter undesired concepts from the response. DIESEL can function either as a standalone safeguard or as an additional layer of defense, enhancing response safety by reranking the LLM's proposed tokens based on their similarity to predefined negative concepts in the latent space. This approach provides an efficient and effective solution for maintaining alignment with human values. Our evaluation demonstrates DIESEL's effectiveness on state-of-the-art conversational models (e.g., Llama 3), even in challenging jailbreaking scenarios that test the limits of response safety. We further show that DIESEL can be generalized to use cases other than safety, providing a versatile solution for general-purpose response filtering with minimal computational overhead.
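The reranking idea described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function name `diesel_rerank`, the penalty weight `alpha`, and the use of cosine similarity over toy embedding vectors are all assumptions made for illustration.

```python
import numpy as np

def diesel_rerank(candidate_ids, logits, token_embeddings, negative_embeddings, alpha=0.5):
    """Rerank candidate tokens, demoting those whose latent embedding is
    similar to any predefined negative-concept embedding.

    Illustrative sketch only; names, shapes, and the exact scoring rule
    are assumptions, not the paper's API.
    """
    def cos(a, b):
        # cosine similarity with a small epsilon for numerical stability
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

    scores = []
    for tok, logit in zip(candidate_ids, logits):
        emb = token_embeddings[tok]
        # penalize by the worst-case similarity to any negative concept
        penalty = max(cos(emb, n) for n in negative_embeddings)
        scores.append(logit - alpha * penalty)

    # return candidate ids sorted by adjusted score, best first
    order = np.argsort(scores)[::-1]
    return [candidate_ids[i] for i in order]
```

For example, with two candidate tokens where token 0 has the higher raw logit but its embedding matches a negative concept exactly, the penalty pushes token 1 to the top of the ranking; the generation loop would then emit the safer token instead.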