Most prior safety research on large language models (LLMs) has focused on improving alignment so that LLMs better satisfy human safety requirements. However, internalizing such safeguard features into larger models incurs higher training costs and unintended degradation of helpfulness. To overcome these challenges, a modular approach that employs a smaller LLM to detect harmful user queries is regarded as a convenient solution when designing LLM-based systems with safety requirements. In this paper, we leverage a smaller LLM for both harmful query detection and safeguard response generation. We introduce our safety requirements and a taxonomy of harmfulness categories, and then propose a multi-task learning mechanism that fuses the two tasks into a single model. We demonstrate the effectiveness of our approach, achieving harmful query detection and safeguard response performance on par with or surpassing that of publicly available LLMs.
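The multi-task mechanism is only named here, not specified; as a rough, non-authoritative sketch of how harmful query detection and safeguard response generation could share a single model, the snippet below attaches a classification head (over harmfulness categories) and a standard LM head to one shared backbone and optimizes a weighted sum of the two losses. The class name, the `alpha` weighting, and the last-token pooling are illustrative assumptions, not the paper's actual design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SafetyMultiTaskModel(nn.Module):
    """Hypothetical sketch: one shared LM backbone with two heads,
    one for harmful-query classification and one for safeguard
    response generation (a standard language-modeling head)."""

    def __init__(self, backbone: nn.Module, hidden_size: int,
                 vocab_size: int, num_harm_categories: int):
        super().__init__()
        # Assumption: `backbone` maps token ids to hidden states
        # of shape (batch, seq_len, hidden_size).
        self.backbone = backbone
        self.lm_head = nn.Linear(hidden_size, vocab_size)            # response generation
        self.cls_head = nn.Linear(hidden_size, num_harm_categories)  # harmfulness taxonomy

    def forward(self, input_ids, labels=None, harm_labels=None, alpha=0.5):
        hidden = self.backbone(input_ids)            # (batch, seq, hidden)
        lm_logits = self.lm_head(hidden)             # (batch, seq, vocab)
        cls_logits = self.cls_head(hidden[:, -1])    # pool last token (an assumption)

        loss = None
        if labels is not None and harm_labels is not None:
            # Next-token prediction loss for the safeguard response.
            lm_loss = F.cross_entropy(
                lm_logits[:, :-1].reshape(-1, lm_logits.size(-1)),
                labels[:, 1:].reshape(-1),
                ignore_index=-100,
            )
            # Classification loss for harmful query detection.
            cls_loss = F.cross_entropy(cls_logits, harm_labels)
            # Weighted sum fuses both objectives into a single model update.
            loss = alpha * cls_loss + (1 - alpha) * lm_loss
        return loss, lm_logits, cls_logits
```

In this kind of setup, a single backward pass through the combined loss trains both capabilities jointly, which is one plausible reading of "fusing the two tasks into a single model"; the actual weighting scheme and pooling strategy would depend on the paper's full method section.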