Persona agents, LLM agents that act according to an assigned persona, have demonstrated impressive contextual response capabilities across various applications. These persona agents offer significant enhancements across diverse sectors such as education, healthcare, and entertainment, where model developers can align agent responses to different user requirements, thereby broadening the scope of agent applications. However, evaluating persona agent performance is incredibly challenging due to the complexity of assessing persona adherence in free-form interactions across the varied environments relevant to each persona agent. We introduce PersonaGym, the first dynamic evaluation framework for assessing persona agents, and PersonaScore, the first automated, human-aligned metric grounded in decision theory for comprehensive large-scale evaluation of persona agents. Our evaluation of 6 open- and closed-source LLMs, using a benchmark encompassing 200 personas and 10,000 questions, reveals significant room for improvement in persona agent capabilities across state-of-the-art models. For example, Claude 3.5 Sonnet achieves only a 2.97% relative improvement in PersonaScore over GPT-3.5, despite being a far more advanced model. Importantly, we find that increased model size and complexity do not necessarily imply enhanced persona agent capabilities, highlighting the pressing need for algorithmic and architectural innovation toward faithful and performant persona agents.