Large language models (LLMs) have demonstrated strong potential and impressive performance in automating the generation and optimization of workflows. However, existing approaches suffer from limited reasoning capabilities and high computational and resource costs. To address these issues, we propose DebFlow, a framework that employs a debate mechanism to optimize workflows and integrates reflection to improve based on previous experience. We evaluate our method on six benchmark datasets, including HotpotQA, MATH, and ALFWorld. Our approach achieves a 3\% average performance improvement over the latest baselines, demonstrating its effectiveness across diverse problem domains. Notably, during training, our framework reduces resource consumption by 37\% compared to state-of-the-art baselines. We additionally perform ablation studies: removing the Debate component results in a 4\% performance drop across two benchmark datasets, significantly larger than the 2\% drop observed when the Reflection component is removed. These findings underscore the critical role of debate in enhancing framework performance, while also highlighting the auxiliary contribution of reflection to overall optimization.