As large language models evolve into tool-augmented agents, a central question remains unresolved: when is external tool use actually justified? Existing agent frameworks typically treat tools as ordinary actions and optimize for task success or reward, offering little principled distinction between epistemically necessary interaction and unnecessary delegation. This position paper argues that agents should invoke external tools only when epistemically necessary. Here, epistemic necessity means that a task cannot be completed reliably via the agent's internal reasoning over its current context, without any external interaction. We introduce the Theory of Agent (ToA), a framework that treats agents as making sequential decisions about whether remaining uncertainty should be resolved internally or delegated externally. From this perspective, common agent failure modes (e.g., overthinking and overacting) arise from miscalibrated decisions under uncertainty rather than deficiencies in reasoning or tool execution alone. We further discuss implications for training, evaluation, and agent design, highlighting that unnecessary delegation not only causes inefficiency but can impede the development of internal reasoning capability. Our position provides a normative criterion for tool use that complements existing decision-theoretic models and is essential for building agents that are not only correct, but increasingly intelligent.