Safeguarding large language models (LLMs) against unsafe or adversarial behavior is critical as they are increasingly deployed in conversational and agentic settings. Existing moderation tools often treat safety risks (e.g., toxicity, bias) and adversarial threats (e.g., prompt injections, jailbreaks) as separate problems, limiting their robustness and generalizability. We introduce AprielGuard, an 8B-parameter safeguard model that unifies these dimensions within a single taxonomy and learning framework. AprielGuard is trained on a diverse mix of open and synthetic data covering standalone prompts, multi-turn conversations, and agentic workflows, augmented with structured reasoning traces to improve interpretability. Across multiple public and proprietary benchmarks, AprielGuard achieves strong performance in detecting harmful content and adversarial manipulations, outperforming existing open-source guardrails such as Llama-Guard and Granite Guardian, particularly in multi-step and reasoning-intensive scenarios. By releasing the model, we aim to advance transparent and reproducible research on reliable safeguards for LLMs.