The safety of Large Language Models (LLMs) has gained increasing attention in recent years, but there is still no comprehensive approach for detecting safety issues within LLMs' responses in an aligned, customizable, and explainable manner. In this paper, we propose ShieldLM, an LLM-based safety detector that aligns with common safety standards, supports customizable detection rules, and provides explanations for its decisions. To train ShieldLM, we compile a large bilingual dataset comprising 14,387 query-response pairs, annotating the safety of responses according to various safety standards. Through extensive experiments, we demonstrate that ShieldLM surpasses strong baselines across four test sets, showcasing remarkable customizability and explainability. Besides performing well on standard detection datasets, ShieldLM also proves effective as a safety evaluator for advanced LLMs. ShieldLM is released at \url{https://github.com/thu-coai/ShieldLM} to support accurate and explainable safety detection under various safety standards.