Large Language Models (LLMs) have emerged as transformative tools for Security Operations Centers, enabling automated log analysis, phishing triage, and malware explanation. However, deployment in adversarial cybersecurity environments exposes critical vulnerabilities to prompt injection attacks, in which malicious instructions embedded in security artifacts manipulate model behavior. This paper introduces SecureCAI, a novel defense framework that extends Constitutional AI with security-aware guardrails, adaptive constitution evolution, and Direct Preference Optimization (DPO) for unlearning unsafe response patterns, addressing the unique challenges of high-stakes security contexts where traditional safety mechanisms prove insufficient against sophisticated adversarial manipulation. Experimental evaluation shows that SecureCAI reduces attack success rates by 94.7% relative to baseline models while maintaining 95.1% accuracy on benign security analysis tasks. A continuous red-teaming feedback loop enables dynamic adaptation to emerging attack strategies, and the framework achieves constitution adherence scores above 0.92 under sustained adversarial pressure. These results establish a foundation for trustworthy integration of language model capabilities into operational cybersecurity workflows and address a critical gap in current approaches to AI safety in adversarial domains.