As foundation models (FMs) play an increasingly prominent role in complex software systems, such as agentic software, they introduce significant observability and debuggability challenges. Although recent Large Reasoning Models (LRMs) generate their thought processes as part of the output, fast-thinking Large Language Models (LLMs) are still preferred in many scenarios due to latency constraints. LLM-powered agents operate autonomously with opaque implicit reasoning, making it difficult to debug their unexpected behaviors or errors. In this paper, we introduce Watson, a novel framework that provides reasoning observability into the implicit reasoning processes of agents driven by fast-thinking LLMs, allowing the identification and localization of errors and guidance for corrections. We demonstrate the accuracy of the implicit reasoning traces recovered by Watson and their usefulness for debugging and improving the performance of LLM-powered agents in two scenarios: the Massive Multitask Language Understanding (MMLU) benchmark and SWE-bench-lite. Using Watson, we were able to observe and identify implicit reasoning errors and automatically provide targeted corrections at runtime, improving the Pass@1 of agents on MMLU and SWE-bench-lite by 7.58 and 7.76 percentage points (13.45% and 12.31% relative improvement), respectively, without updates to the models or the cognitive architecture of the agents.