The integration of Large Language Models (LLMs) into network operations (AIOps) is hindered by two fundamental challenges: the stochastic grounding problem, where LLMs struggle to reliably parse unstructured, vendor-specific CLI output, and the security gap of granting autonomous agents shell access. This paper introduces MCP-Diag, a hybrid neuro-symbolic architecture built upon the Model Context Protocol (MCP). We propose a deterministic translation layer that converts raw stdout from canonical utilities (dig, ping, traceroute) into rigorous JSON schemas before AI ingestion. We further introduce a mandatory "Elicitation Loop" that enforces Human-in-the-Loop (HITL) authorization at the protocol level. Our preliminary evaluation demonstrates that MCP-Diag achieves 100% entity extraction accuracy with less than 0.9% execution latency overhead and a 3.7x increase in context token usage.
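The deterministic translation layer described above can be illustrated with a minimal sketch: parse the summary lines of `ping -c N host` stdout into a fixed JSON schema before the model ever sees raw text. The function name and field names here are assumptions for illustration, not the actual MCP-Diag schema.

```python
import json
import re

def parse_ping(stdout: str) -> str:
    """Deterministically translate ping's stdout summary into JSON.

    Hypothetical schema for illustration; MCP-Diag's real schema
    is defined in the paper's translation layer, not here.
    """
    result = {
        "tool": "ping",
        "packet_loss_pct": None,
        "rtt_ms": {"min": None, "avg": None, "max": None},
    }
    # e.g. "4 packets transmitted, 4 received, 0% packet loss, time 3004ms"
    loss = re.search(r"(\d+(?:\.\d+)?)% packet loss", stdout)
    if loss:
        result["packet_loss_pct"] = float(loss.group(1))
    # e.g. "rtt min/avg/max/mdev = 11.2/12.5/14.1/1.1 ms"
    rtt = re.search(r"= (\d+\.\d+)/(\d+\.\d+)/(\d+\.\d+)", stdout)
    if rtt:
        result["rtt_ms"] = {
            "min": float(rtt.group(1)),
            "avg": float(rtt.group(2)),
            "max": float(rtt.group(3)),
        }
    return json.dumps(result)

sample = (
    "--- 8.8.8.8 ping statistics ---\n"
    "4 packets transmitted, 4 received, 0% packet loss, time 3004ms\n"
    "rtt min/avg/max/mdev = 11.2/12.5/14.1/1.1 ms\n"
)
parsed = json.loads(parse_ping(sample))
```

Because the regexes either match or yield `None`, the LLM consumes a stable, schema-validated object rather than free-form text, which is the grounding guarantee the abstract refers to.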