Large language model (LLM)-based agents are increasingly used to perform complex, multi-step workflows in regulated settings such as compliance and due diligence. However, many agentic architectures rely primarily on prompt engineering of a single agent, making it difficult to observe or compare how models handle uncertainty and coordination across interconnected decision stages and under human oversight. We introduce a multi-agent system formalized as a finite-horizon Markov Decision Process (MDP) with a directed acyclic structure. Each agent corresponds to a specific role or decision stage (e.g., content, business, or legal review in a compliance workflow), with predefined transitions representing task escalation or completion. Epistemic uncertainty is quantified at the agent level using Monte Carlo estimation, while system-level uncertainty is captured by the MDP's termination in either an automatically labeled state or a human-review state. We illustrate the approach through a case study in AI safety evaluation for self-harm detection, implemented as a multi-agent compliance system. Results demonstrate improvements over a single-agent baseline, including up to a 19\% increase in accuracy, up to an 85x reduction in required human review, and, in some configurations, reduced processing time.
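The routing logic described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the stage names, thresholds, and helper functions (`mc_estimate`, `route`) are assumptions, and the stochastic LLM classifier at each stage is simulated by a Bernoulli vote probability.

```python
import random

# Hypothetical sketch of the described pipeline: a finite-horizon MDP whose
# states are review stages arranged as a directed acyclic chain.  All names
# and thresholds here are illustrative assumptions, not the paper's API.

STAGES = ["content_review", "business_review", "legal_review"]  # decision stages
N_SAMPLES = 20          # Monte Carlo samples drawn per agent
CONF_THRESHOLD = 0.8    # minimum vote share required to auto-label at a stage

def mc_estimate(vote_prob, rng, n=N_SAMPLES):
    """Monte Carlo estimate of one agent's label and its epistemic uncertainty.

    `vote_prob` stands in for repeated stochastic LLM calls: each sample is
    one simulated classification of the item as 'flag' (1) or 'pass' (0).
    """
    votes = [1 if rng.random() < vote_prob else 0 for _ in range(n)]
    p_flag = sum(votes) / n
    label = "flag" if p_flag >= 0.5 else "pass"
    confidence = max(p_flag, 1 - p_flag)  # distance from maximal uncertainty
    return label, confidence

def route(item_probs, rng):
    """Walk the DAG: each stage either terminates with an automated label or
    escalates to the next stage.  Exhausting all stages ends in the
    human-review terminal state (system-level uncertainty)."""
    for stage, p in zip(STAGES, item_probs):
        label, conf = mc_estimate(p, rng)
        if conf >= CONF_THRESHOLD:
            return stage, label       # automated labeled terminal state
    return "human_review", None       # human-oversight terminal state

# One simulated item: per-stage probabilities that the agent votes 'flag'.
rng = random.Random(0)
print(route([0.55, 0.6, 0.95], rng))
```

Under this sketch, confident agents resolve items early (reducing human review), while ambiguous items fall through to the human-review state rather than receiving an unreliable automated label.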