Large Language Models (LLMs) have significantly advanced natural language processing (NLP) tasks but also pose ethical and societal risks due to their propensity to generate harmful content. To address this, various approaches have been developed to safeguard LLMs from producing unsafe content. However, existing methods have limitations, including the need to train task-specific control models and to proactively intervene during text generation, which lead to quality degradation and increased computational overhead. To mitigate these limitations, we propose LLMSafeGuard, a lightweight framework that safeguards LLM text generation in real time. LLMSafeGuard integrates an external validator into the beam search algorithm during decoding, rejecting candidates that violate safety constraints while allowing valid ones to proceed. We introduce a similarity-based validation approach that simplifies the introduction of constraints and eliminates the need to train a control model. Additionally, LLMSafeGuard employs a context-wise timing selection strategy, intervening in the LLM only when necessary. We evaluate LLMSafeGuard on two tasks, detoxification and copyright safeguarding, and demonstrate its superior performance over state-of-the-art (SOTA) baselines. For instance, in the detoxification task, LLMSafeGuard reduces the average toxicity score of LLM output by 29.7% compared with the best baseline while preserving linguistic quality comparable to natural output. Similarly, in the copyright task, LLMSafeGuard decreases the Longest Common Subsequence (LCS) by 56.2% compared with baselines. Moreover, our context-wise timing selection strategy reduces inference time by at least 24% while maintaining effectiveness comparable to validating at every time step. LLMSafeGuard also offers tunable parameters to balance effectiveness and efficiency.
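The core idea of integrating an external validator into beam search can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names (`validate`, `guarded_beam_step`), the cosine-similarity check against a set of unsafe reference vectors, and the 0.8 threshold are all assumptions introduced for illustration.

```python
# Hypothetical sketch of validator-guided beam search, not the actual
# LLMSafeGuard code. The validator, embedding function, and threshold
# below are illustrative assumptions.

def cosine_similarity(a, b):
    """Cosine similarity between two vectors (0.0 if either is zero)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def validate(candidate_vec, unsafe_vecs, threshold=0.8):
    """Similarity-based check: reject a candidate whose embedding is too
    close to any known unsafe example (threshold is an assumption)."""
    return all(cosine_similarity(candidate_vec, u) < threshold
               for u in unsafe_vecs)

def guarded_beam_step(beams, expansions, embed, unsafe_vecs, beam_width=2):
    """One beam-search step: expand each beam, drop candidates that fail
    the external validator, and keep the top-scoring safe candidates."""
    candidates = []
    for seq, score in beams:
        for token, logp in expansions(seq):
            new_seq = seq + [token]
            if validate(embed(new_seq), unsafe_vecs):  # external safety check
                candidates.append((new_seq, score + logp))
    candidates.sort(key=lambda c: c[1], reverse=True)
    return candidates[:beam_width]
```

In this toy setting, a candidate extending the beam with an unsafe token is pruned before it can propagate, while safe candidates continue through ordinary beam scoring; a context-wise timing strategy would additionally skip the `validate` call at time steps judged low-risk.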