Agentic AI systems built around large language models (LLMs) are moving away from closed, single-model frameworks toward open ecosystems that connect a variety of agents, external tools, and resources. The Model Context Protocol (MCP) has emerged as a standard for unifying tool access, allowing agents to discover, invoke, and coordinate tools more flexibly. However, as MCP sees wider adoption, it also introduces a new set of security and privacy challenges, including unauthorized access, tool poisoning, prompt injection, privilege escalation, and supply-chain attacks, each of which can compromise a different stage of the protocol workflow. While recent research has examined possible attack surfaces and proposed targeted countermeasures, systematic, protocol-level security improvements for MCP are still lacking. To address this gap, we introduce the Secure Model Context Protocol (SMCP), which extends MCP with unified identity management, robust mutual authentication, continuous security-context propagation, fine-grained policy enforcement, and comprehensive audit logging. In this paper, we present the main components of SMCP, explain how they mitigate security risks, and illustrate their application with practical examples. We hope this work contributes to the development of agentic systems that are not only powerful and adaptable but also secure and dependable.
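To make two of the abstract's mechanisms concrete, the following is a minimal sketch of how fine-grained policy enforcement could be paired with tamper-evident audit logging for tool invocations. All names here (`PolicyEngine`, `AuditLog`, the `(agent, tool, action)` rule triples) are illustrative assumptions for this sketch, not part of MCP or of any published SMCP implementation.

```python
import hashlib
import json
import time


class AuditLog:
    """Append-only audit log; each entry's hash chains over the previous one,
    so after-the-fact modification of any entry is detectable (an assumption
    of this sketch, not a prescribed SMCP format)."""

    def __init__(self):
        self.entries = []
        self._prev_hash = "0" * 64  # genesis value for the hash chain

    def record(self, event: dict) -> str:
        payload = json.dumps(event, sort_keys=True)
        digest = hashlib.sha256((self._prev_hash + payload).encode()).hexdigest()
        self.entries.append({"event": event, "hash": digest})
        self._prev_hash = digest
        return digest


class PolicyEngine:
    """Fine-grained allow-list over (agent, tool, action) triples; every
    authorization decision, allowed or denied, is written to the audit log."""

    def __init__(self, audit: AuditLog):
        self.rules = set()
        self.audit = audit

    def allow(self, agent: str, tool: str, action: str) -> None:
        self.rules.add((agent, tool, action))

    def authorize(self, agent: str, tool: str, action: str) -> bool:
        decision = (agent, tool, action) in self.rules
        self.audit.record({
            "ts": time.time(),
            "agent": agent,
            "tool": tool,
            "action": action,
            "allowed": decision,
        })
        return decision


# Example: a planner agent may invoke the search tool, but a filesystem
# delete it never registered for is denied and still logged.
audit = AuditLog()
engine = PolicyEngine(audit)
engine.allow("planner", "search", "invoke")
```

Keying rules on the full `(agent, tool, action)` triple, rather than on the tool alone, is what makes the enforcement "fine-grained": the same agent can be permitted to invoke one tool while being denied a different action on another, and the chained log records denials as well as grants.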