Persona agents, which are LLM agents that act according to an assigned persona, have demonstrated impressive contextual response capabilities across various applications. These persona agents offer significant enhancements across diverse sectors such as education, healthcare, and entertainment, where model developers can align agent responses to different user requirements, thereby broadening the scope of agent applications. However, evaluating persona agent performance is incredibly challenging due to the complexity of assessing persona adherence in free-form interactions across the various environments relevant to each persona agent. We introduce PersonaGym, the first dynamic evaluation framework for assessing persona agents, and PersonaScore, the first automated human-aligned metric grounded in decision theory for comprehensive large-scale evaluation of persona agents. Our evaluation of 6 open and closed-source LLMs, using a benchmark encompassing 200 personas and 10,000 questions, reveals significant room for advancement in persona agent capabilities across state-of-the-art models. For example, Claude 3.5 Sonnet achieves only a 2.97% relative improvement in PersonaScore over GPT 3.5, despite being a much more advanced model. Importantly, we find that increased model size and complexity do not necessarily imply enhanced persona agent capabilities, highlighting the pressing need for algorithmic and architectural invention toward faithful and performant persona agents.
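For context on the 2.97% figure, the minimal sketch below shows how a relative improvement between two aggregate PersonaScores could be computed; the function name and the two score values are hypothetical placeholders for illustration, not values taken from the paper.

```python
# Hedged sketch: computing a relative PersonaScore improvement.
# The two scores below are hypothetical, chosen only so the output
# matches the 2.97% relative-improvement figure cited in the abstract.

def relative_improvement(new_score: float, baseline_score: float) -> float:
    """Relative improvement of new_score over baseline_score, in percent."""
    return (new_score - baseline_score) / baseline_score * 100

# Hypothetical aggregate PersonaScores (illustrative only).
gpt_35_score = 3.70
claude_35_sonnet_score = 3.81

print(f"{relative_improvement(claude_35_sonnet_score, gpt_35_score):.2f}%")
# -> 2.97%
```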