We present MAESTRO, an evaluation suite for the testing, reliability, and observability of LLM-based multi-agent systems (MAS). MAESTRO standardizes MAS configuration and execution through a unified interface, supports integration of both native and third-party MAS via a repository of examples and lightweight adapters, and exports framework-agnostic execution traces together with system-level signals (e.g., latency, cost, and failures). We instantiate MAESTRO with 12 representative MAS spanning popular agentic frameworks and interaction patterns, and conduct controlled experiments across repeated runs, backend models, and tool configurations. Our case studies show that MAS executions can be structurally stable yet temporally variable, leading to substantial run-to-run variance in performance and reliability. We further find that MAS architecture is the dominant driver of resource profiles, reproducibility, and cost-latency-accuracy trade-offs, often outweighing changes in backend models or tool settings. Overall, MAESTRO enables systematic evaluation and provides empirical guidance for designing and optimizing agentic systems.