The rapid development of Large Language Models (LLMs) has led to a surge in applications in which multiple agents collaborate to assist humans in their daily tasks. However, a significant gap remains in assessing the extent to which LLM-powered applications genuinely enhance user experience and task execution efficiency. This highlights the need to verify the utility of LLM-powered applications, particularly by ensuring alignment between an application's functionality and end-user needs. We introduce AgentEval, a novel framework designed to simplify the utility verification process by automatically proposing a set of criteria tailored to the unique purpose of any given application. This enables a comprehensive assessment that quantifies an application's utility against the suggested criteria. We present a thorough analysis of the effectiveness and robustness of AgentEval on two open-source datasets: math problem solving and ALFWorld household tasks. For reproducibility, we make the data, code, and all logs publicly available at https://bit.ly/3w3yKcS.
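To make the two-stage flow concrete, the sketch below illustrates the criteria-proposal and utility-quantification steps described above. It is a minimal illustration, not the paper's implementation: the function names `propose_criteria` and `quantify`, the JSON schema, and the generic `llm` completion callable are all hypothetical placeholders for whatever model backend is used.

```python
import json
from typing import Callable, Dict, List

# Hypothetical sketch of the two-stage AgentEval flow: first propose
# task-specific criteria, then score a candidate solution against them.
# `llm` stands in for any text-completion call (a placeholder, not a real API).

def propose_criteria(llm: Callable[[str], str], task_description: str) -> List[Dict]:
    """Ask the model for assessment criteria tailored to the given task."""
    prompt = (
        "Propose evaluation criteria for the following task. "
        "Return a JSON list of objects with 'name', 'description', "
        "and 'accepted_values' fields.\n\n"
        f"Task: {task_description}"
    )
    return json.loads(llm(prompt))

def quantify(llm: Callable[[str], str], task_description: str,
             solution: str, criteria: List[Dict]) -> Dict[str, str]:
    """Rate a candidate solution on each proposed criterion."""
    prompt = (
        "Rate the solution on each criterion, choosing one of its "
        "accepted values. Return a JSON object mapping criterion name "
        "to the chosen value.\n\n"
        f"Task: {task_description}\n"
        f"Criteria: {json.dumps(criteria)}\n"
        f"Solution: {solution}"
    )
    return json.loads(llm(prompt))
```

Under this sketch, running `propose_criteria` once per task family and `quantify` per solution yields a per-criterion utility profile rather than a single scalar score, which matches the abstract's framing of assessing utility against a set of suggested criteria.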