Process models are frequently used in software engineering to describe business requirements, guide software testing, and control system improvement. However, traditional process modeling methods often require the participation of numerous experts, which is expensive and time-consuming. Exploring a more efficient and cost-effective automated modeling method has therefore become a focal point of current research. This article presents a framework for automatically generating process models via multi-agent orchestration (MAO), aiming to improve the efficiency of process modeling and offer valuable insights to domain experts. Our framework MAO leverages large language models as the cornerstone of its agents and employs an innovative prompting strategy to ensure efficient collaboration among them. Specifically: 1) Generation. In the first phase, MAO generates a rough initial process model from a textual description. 2) Refinement. The agents continuously refine the initial process model through multiple rounds of dialogue. 3) Reviewing. Large language models are prone to hallucination across multi-turn dialogues, so the agents review the process model and repair any semantic hallucinations. 4) Testing. Because process models can be represented in diverse formats, the agents use external tools to test whether the generated process model contains format errors, namely format hallucinations, and then adjust the model to conform to the output paradigm. Experiments demonstrate that the process models generated by our framework outperform existing methods and surpass manual modeling by 89%, 61%, 52%, and 75% on four different datasets, respectively.
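The four phases above can be sketched as a simple sequential pipeline. The following is a minimal, hypothetical illustration only: every agent function below is a stand-in stub, not the paper's actual prompts, models, or tooling; a real implementation would back each agent with a large language model call and a genuine external format checker.

```python
def generate_agent(description: str) -> str:
    """Phase 1 (generation): produce a rough process model from a
    textual description. Stubbed as a trivial start->task->end chain."""
    return f"start -> [{description}] -> end"


def refine_agent(model: str, rounds: int = 3) -> str:
    """Phase 2 (refinement): refine the initial model over multiple
    dialogue rounds. Each iteration here is a placeholder for one
    round of agent dialogue."""
    for _ in range(rounds):
        model = model  # a real agent would revise the model here
    return model


def review_agent(model: str) -> str:
    """Phase 3 (reviewing): detect and repair semantic hallucinations.
    Stubbed as a simple structural sanity check."""
    if not model.startswith("start"):
        model = "start -> " + model
    return model


def format_test_agent(model: str) -> str:
    """Phase 4 (testing): check for format hallucinations with an
    external tool (stubbed) and adjust the model to the output
    paradigm."""
    if not model.endswith("end"):
        model = model + " -> end"
    return model


def mao_pipeline(description: str) -> str:
    """Run the four MAO phases in order on one text description."""
    model = generate_agent(description)   # 1) generation
    model = refine_agent(model)           # 2) refinement
    model = review_agent(model)           # 3) reviewing
    model = format_test_agent(model)      # 4) testing
    return model


print(mao_pipeline("approve purchase order"))
# -> start -> [approve purchase order] -> end
```

The key design point this sketch reflects is that each phase consumes and emits the same artifact (the process model), so hallucination repair and format testing can be appended as independent agents without changing the earlier phases.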