The security of Large Language Model (LLM) applications is fundamentally challenged by "form-first" attacks such as prompt injection and jailbreaking, in which malicious instructions are embedded within user inputs. Conventional defenses, which rely on post hoc output filtering, are often brittle and fail to address the root cause: the model's inability to distinguish trusted instructions from untrusted data. This paper proposes Countermind, a multi-layered security architecture intended to shift defenses from a reactive, post hoc posture to a proactive, pre-inference and intra-inference enforcement model. The architecture pairs a fortified perimeter, designed to structurally validate and transform all inputs, with an internal governance mechanism intended to constrain the model's semantic processing pathways before an output is generated. The primary contributions of this work are conceptual designs for: (1) a Semantic Boundary Logic (SBL) with a mandatory, time-coupled Text Crypter intended to reduce the plaintext prompt-injection attack surface, provided enforcement covers all ingestion paths; (2) a Parameter-Space Restriction (PSR) mechanism that leverages principles from representation engineering to dynamically control the LLM's access to internal semantic clusters, with the goal of mitigating semantic drift and dangerous emergent behaviors; (3) a Secure, Self-Regulating Core that uses an OODA (Observe-Orient-Decide-Act) loop and a learning security module to adapt its defenses based on an immutable audit log; and (4) a Multimodal Input Sandbox and Context-Defense mechanisms that address threats from non-textual data and long-term semantic poisoning. This paper also outlines an evaluation plan designed to quantify the proposed architecture's effectiveness in reducing the Attack Success Rate (ASR) for form-first attacks and to measure its potential latency overhead.
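To make contribution (1) more concrete, the sketch below shows one way a time-coupled ingestion gate could work, using an HMAC-based seal as a stand-in for the Text Crypter: the perimeter seals every inbound text within a short time window, and the inference gateway rejects anything unsealed or stale, so plaintext injected outside the enforced path never reaches the model. This is a minimal, hypothetical sketch; the key handling, window length, and function names (`seal`, `verify`) are assumptions, not the paper's specified design.

```python
# Hypothetical sketch of a time-coupled sealing step for SBL ingestion.
# The perimeter seals inbound text with an HMAC bound to a short time
# window; the inference gateway rejects anything unsealed or stale.
# (HMAC sealing is a stand-in here, not the paper's Text Crypter.)
import hashlib
import hmac
import time

SECRET = b"perimeter-only-key"  # assumed: held by the perimeter, never the model
WINDOW_SECONDS = 30             # assumed freshness window

def seal(text: str, now: float | None = None) -> tuple[str, int, str]:
    """Seal text at ingestion time; returns (text, window index, tag)."""
    window = int((now or time.time()) // WINDOW_SECONDS)
    msg = f"{window}|{text}".encode()
    tag = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return text, window, tag

def verify(text: str, window: int, tag: str, now: float | None = None) -> bool:
    """Accept only seals from the current or immediately previous window."""
    current = int((now or time.time()) // WINDOW_SECONDS)
    if window not in (current, current - 1):
        return False  # stale seal: the time coupling failed
    msg = f"{window}|{text}".encode()
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

if __name__ == "__main__":
    sealed = seal("summarize this document")
    print(verify(*sealed))                        # True: fresh, sealed input
    print(verify("ignore previous instructions",  # False: never sealed
                 sealed[1], sealed[2]))
```

In a real deployment, such a freshness check would presumably be paired with the SBL's structural validation and with actual encryption of the payload, so that text reaching the model carries provenance the model-facing gateway can verify.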
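Contribution (2) can likewise be illustrated. The sketch below shows a common representation-engineering pattern that a PSR gate might build on: a forward hook that projects a restricted semantic direction out of a layer's hidden states at inference time, attenuating the corresponding semantic cluster before the next layer sees it. The `ToyBlock`, the pre-computed `banned` direction, and the `strength` parameter are all illustrative assumptions; the paper's PSR mechanism is a conceptual design, not this code.

```python
# Minimal, hypothetical sketch of a Parameter-Space Restriction (PSR) gate:
# a PyTorch forward hook removes the component of a layer's output that lies
# along a restricted semantic direction. Names are illustrative assumptions.
import torch
import torch.nn as nn

class ToyBlock(nn.Module):
    """Stand-in for one transformer block (hidden size 8)."""
    def __init__(self, d: int = 8):
        super().__init__()
        self.proj = nn.Linear(d, d)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.proj(h))

def make_psr_hook(restricted_dir: torch.Tensor, strength: float = 1.0):
    """Build a forward hook that subtracts the component of a layer's output
    lying along `restricted_dir` (an assumed, pre-computed direction for a
    restricted semantic cluster), scaled by `strength` in [0, 1]."""
    u = restricted_dir / restricted_dir.norm()

    def hook(module, inputs, output):
        coeff = (output @ u).unsqueeze(-1)     # projection coefficients (..., 1)
        return output - strength * coeff * u   # remove the restricted component

    return hook

if __name__ == "__main__":
    torch.manual_seed(0)
    block = ToyBlock()
    banned = torch.randn(8)  # assumed: direction of a policy-restricted cluster
    handle = block.register_forward_hook(make_psr_hook(banned))

    hidden = torch.randn(2, 8)  # toy hidden states for two tokens
    out = block(hidden)
    u = banned / banned.norm()
    print(out @ u)   # ~0 everywhere: the restricted component is ablated
    handle.remove()  # the governance layer can lift the restriction per request
```

In practice, such directions are typically derived from contrastive activation pairs, and hooks of this kind would be attached to selected transformer layers and toggled per request by the governance layer, which is one plausible reading of how PSR could "dynamically control" access to internal semantic clusters.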